prompts (string, lengths 81–413)
metrics_response (string, lengths 0–371)
What metrics were used to measure the SimCLR (ResNet-50) model in the A Simple Framework for Contrastive Learning of Visual Representations paper on the ImageNet - 1% labeled data dataset?
Top 1 Accuracy, Top 5 Accuracy
What metrics were used to measure the SCAN (ResNet-50|Unsupervised) model in the SCAN: Learning to Classify Images without Labels paper on the ImageNet - 1% labeled data dataset?
Top 1 Accuracy, Top 5 Accuracy
What metrics were used to measure the OBoW (ResNet-50) model in the OBoW: Online Bag-of-Visual-Words Generation for Self-Supervised Learning paper on the ImageNet - 1% labeled data dataset?
Top 1 Accuracy, Top 5 Accuracy
What metrics were used to measure the PCL (ResNet-50) model in the Prototypical Contrastive Learning of Unsupervised Representations paper on the ImageNet - 1% labeled data dataset?
Top 1 Accuracy, Top 5 Accuracy
What metrics were used to measure the CPC model in the Representation Learning with Contrastive Predictive Coding paper on the ImageNet - 1% labeled data dataset?
Top 1 Accuracy, Top 5 Accuracy
What metrics were used to measure the BigBiGAN (RevNet-50 ×4, BN+CReLU) model in the Large Scale Adversarial Representation Learning paper on the ImageNet - 1% labeled data dataset?
Top 1 Accuracy, Top 5 Accuracy
What metrics were used to measure the Rotation (joint training) model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 1% labeled data dataset?
Top 1 Accuracy, Top 5 Accuracy
What metrics were used to measure the Pseudolabeling model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 1% labeled data dataset?
Top 1 Accuracy, Top 5 Accuracy
What metrics were used to measure the Exemplar (joint training) model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 1% labeled data dataset?
Top 1 Accuracy, Top 5 Accuracy
What metrics were used to measure the VAT + Entropy Minimization model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 1% labeled data dataset?
Top 1 Accuracy, Top 5 Accuracy
What metrics were used to measure the Rotation model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 1% labeled data dataset?
Top 1 Accuracy, Top 5 Accuracy
What metrics were used to measure the Exemplar model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 1% labeled data dataset?
Top 1 Accuracy, Top 5 Accuracy
What metrics were used to measure the VAT model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 1% labeled data dataset?
Top 1 Accuracy, Top 5 Accuracy
What metrics were used to measure the Instance Discrimination (ResNet-50) model in the Unsupervised Feature Learning via Non-Parametric Instance Discrimination paper on the ImageNet - 1% labeled data dataset?
Top 1 Accuracy, Top 5 Accuracy
What metrics were used to measure the BOSS model in the Building One-Shot Semi-supervised (BOSS) Learning up to Fully Supervised Performance paper on the CIFAR-10, 10 Labels dataset?
Accuracy (Test)
What metrics were used to measure the MutexMatch model in the MutexMatch: Semi-Supervised Learning with Mutex-Based Consistency Regularization paper on the CIFAR-10, 10 Labels dataset?
Accuracy (Test)
What metrics were used to measure the Semi-MMDC model in the Boosting the Performance of Semi-Supervised Learning with Unsupervised Clustering paper on the CIFAR-10, 10 Labels dataset?
Accuracy (Test)
What metrics were used to measure the ShrinkMatch model in the Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning paper on the SVHN, 250 Labels dataset?
Accuracy
What metrics were used to measure the Semi-MMDC model in the Boosting the Performance of Semi-Supervised Learning with Unsupervised Clustering paper on the SVHN, 250 Labels dataset?
Accuracy
What metrics were used to measure the DoubleMatch model in the DoubleMatch: Improving Semi-Supervised Learning with Self-Supervision paper on the SVHN, 250 Labels dataset?
Accuracy
What metrics were used to measure the MutexMatch (k=0.6C) model in the MutexMatch: Semi-Supervised Learning with Mutex-Based Consistency Regularization paper on the SVHN, 250 Labels dataset?
Accuracy
What metrics were used to measure the EnAET model in the EnAET: A Self-Trained framework for Semi-Supervised and Supervised Learning with Ensemble Transformations paper on the SVHN, 250 Labels dataset?
Accuracy
What metrics were used to measure the Triple-GAN-V2 (CNN-13) model in the Triple Generative Adversarial Networks paper on the SVHN, 250 Labels dataset?
Accuracy
What metrics were used to measure the RealMix model in the RealMix: Towards Realistic Semi-Supervised Deep Learning Algorithms paper on the SVHN, 250 Labels dataset?
Accuracy
What metrics were used to measure the MixMatch model in the MixMatch: A Holistic Approach to Semi-Supervised Learning paper on the SVHN, 250 Labels dataset?
Accuracy
What metrics were used to measure the Triple-GAN-V2 (CNN-13, no aug) model in the Triple Generative Adversarial Networks paper on the SVHN, 250 Labels dataset?
Accuracy
What metrics were used to measure the Dual Student model in the Dual Student: Breaking the Limits of the Teacher in Semi-supervised Learning paper on the SVHN, 250 Labels dataset?
Accuracy
What metrics were used to measure the MeanTeacher model in the Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results paper on the SVHN, 250 Labels dataset?
Accuracy
What metrics were used to measure the SESEMI SSL (ConvNet) model in the Exploring Self-Supervised Regularization for Supervised and Semi-Supervised Learning paper on the SVHN, 250 Labels dataset?
Accuracy
What metrics were used to measure the VAT model in the Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning paper on the SVHN, 250 Labels dataset?
Accuracy
What metrics were used to measure the Π-Model model in the Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning paper on the SVHN, 250 Labels dataset?
Accuracy
What metrics were used to measure the MixUp model in the mixup: Beyond Empirical Risk Minimization paper on the SVHN, 250 Labels dataset?
Accuracy
What metrics were used to measure the CCSSL(FixMatch) model in the Class-Aware Contrastive Semi-Supervised Learning paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the FixMatch+DM model in the Harnessing Hard Mixed Samples with Decoupled Regularizer paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the SimMatch model in the SimMatch: Semi-supervised Learning with Similarity Matching paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the FixMatch+CR model in the Contrastive Regularization for Semi-Supervised Learning paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the DoubleMatch model in the DoubleMatch: Improving Semi-Supervised Learning with Self-Supervision paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the NP-Match model in the NP-Match: When Neural Processes meet Semi-Supervised Learning paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the SMPL (WRN-28-8) model in the Self Meta Pseudo Labels: Meta Pseudo Labels Without The Teacher paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the FreeMatch model in the FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the SimPLE (WRN-28-8) model in the SimPLE: Similar Pseudo Label Exploitation for Semi-Supervised Classification paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the FlexMatch model in the FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the Dash (RA, WRN-28-8) model in the Dash: Semi-Supervised Learning with Dynamic Thresholding paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the LaplaceNet (WRN-28-8) model in the LaplaceNet: A Hybrid Graph-Energy Neural Network for Deep Semi-Supervised Classification paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the DP-SSL model in the DP-SSL: Towards Robust Semi-supervised Learning with A Few Labeled Samples paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the FixMatch (RA, WRN-28-8) model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the EnAET (WRN-28-2-Large) model in the EnAET: A Self-Trained framework for Semi-Supervised and Supervised Learning with Ensemble Transformations paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the CowMix (WRN-28-96x2d) model in the Milking CowMask for Semi-Supervised Image Classification paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the FixMatch (CTA, WRN-28-8) model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the LiDAM model in the LiDAM: Semi-Supervised Learning with Localized Domain Adaptation and Iterative Matching paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the SemCo (μ=7) model in the All Labels Are Not Created Equal: Enhancing Semi-supervision via Label Grouping and Co-training paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the SHOT-VAE model in the SHOT-VAE: Semi-supervised Deep Generative Models With Label-aware ELBO Approximations paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the EnAET (WRN-28-2) model in the EnAET: A Self-Trained framework for Semi-Supervised and Supervised Learning with Ensemble Transformations paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the UPS (CNN-13) model in the In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the Dual Student (480) model in the Dual Student: Breaking the Limits of the Teacher in Semi-supervised Learning paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the R2-D2 (CNN-13) model in the Repetitive Reprediction Deep Decipher for Semi-Supervised Learning paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the Temporal ensembling model in the Temporal Ensembling for Semi-Supervised Learning paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the SESEMI SSL (ConvNet) model in the Exploring Self-Supervised Regularization for Supervised and Semi-Supervised Learning paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the Π-Model model in the Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning paper on the CIFAR-100, 10000 Labels dataset?
Percentage error
What metrics were used to measure the FeatMatch model in the FeatMatch: Feature-Based Augmentation for Semi-Supervised Learning paper on the Mini-ImageNet, 10000 Labels dataset?
Accuracy
What metrics were used to measure the SemCo (μ=3) model in the All Labels Are Not Created Equal: Enhancing Semi-supervision via Label Grouping and Co-training paper on the Mini-ImageNet, 10000 Labels dataset?
Accuracy
What metrics were used to measure the SemCo (μ=7) model in the All Labels Are Not Created Equal: Enhancing Semi-supervision via Label Grouping and Co-training paper on the Mini-ImageNet, 10000 Labels dataset?
Accuracy
What metrics were used to measure the RelationMatch model in the RelationMatch: Matching In-batch Relationships for Semi-supervised Learning paper on the STL-10, 40 Labels dataset?
Accuracy
What metrics were used to measure the ShrinkMatch model in the Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning paper on the STL-10, 40 Labels dataset?
Accuracy
What metrics were used to measure the NP-Match model in the NP-Match: When Neural Processes meet Semi-Supervised Learning paper on the STL-10, 40 Labels dataset?
Accuracy
What metrics were used to measure the DebiasPL (ResNet-50) model in the Debiased Learning from Naturally Imbalanced Pseudo-Labels paper on the ImageNet - 0.2% labeled data dataset?
ImageNet Top-1 Accuracy
What metrics were used to measure the FixMatch w/ EMAN (ResNet-50) model in the Exponential Moving Average Normalization for Self-supervised and Semi-supervised Learning paper on the ImageNet - 0.2% labeled data dataset?
ImageNet Top-1 Accuracy
What metrics were used to measure the CLIP (ResNet-50) model in the Learning Transferable Visual Models From Natural Language Supervision paper on the ImageNet - 0.2% labeled data dataset?
ImageNet Top-1 Accuracy
What metrics were used to measure the Dash (RA) model in the Dash: Semi-Supervised Learning with Dynamic Thresholding paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the DebiasPL (w/ FixMatch) model in the Debiased Learning from Naturally Imbalanced Pseudo-Labels paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the ShrinkMatch model in the Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the FixMatch+DM model in the Harnessing Hard Mixed Samples with Decoupled Regularizer paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the DP-SSL model in the DP-SSL: Towards Robust Semi-supervised Learning with A Few Labeled Samples paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the FlexMatch model in the FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the SimMatch model in the SimMatch: Semi-supervised Learning with Similarity Matching paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the SelfMatch model in the SelfMatch: Combining Contrastive Self-Supervision and Consistency for Semi-Supervised Learning paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the NP-Match model in the NP-Match: When Neural Processes meet Semi-Supervised Learning paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the FreeMatch model in the FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the FixMatch+CR model in the Contrastive Regularization for Semi-Supervised Learning paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the FixMatch (CTA) model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the Semi-MMDC model in the Boosting the Performance of Semi-Supervised Learning with Unsupervised Clustering paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the DoubleMatch model in the DoubleMatch: Improving Semi-Supervised Learning with Self-Supervision paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the ReMixMatch model in the ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the EnAET model in the RealMix: Towards Realistic Semi-Supervised Deep Learning Algorithms paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the RealMix model in the RealMix: Towards Realistic Semi-Supervised Deep Learning Algorithms paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the MixMatch model in the MixMatch: A Holistic Approach to Semi-Supervised Learning paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the LiDAM model in the LiDAM: Semi-Supervised Learning with Localized Domain Adaptation and Iterative Matching paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the VAT model in the Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the MeanTeacher model in the Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the MixUp model in the mixup: Beyond Empirical Risk Minimization paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the Π-Model model in the Temporal Ensembling for Semi-Supervised Learning paper on the CIFAR-10, 250 Labels dataset?
Percentage error
What metrics were used to measure the MixMatch model in the MixMatch: A Holistic Approach to Semi-Supervised Learning paper on the SVHN, 2000 Labels dataset?
Accuracy
What metrics were used to measure the UL-Hopfield (ULH) model in the Unsupervised Learning using Pretrained CNN and Associative Memory Bank paper on the Caltech-101, 202 Labels dataset?
Accuracy
What metrics were used to measure the SimCLR-kmediods-PAWS model in the Cold PAWS: Unsupervised class discovery and addressing the cold-start problem for semi-supervised learning paper on the EuroSAT, 100 Labels dataset?
Percentage error
What metrics were used to measure the DPT model in the Diffusion Models and Semi-Supervised Learners Benefit Mutually with Few Labels paper on the ImageNet - 1 labeled data per class dataset?
Top 1 Accuracy
What metrics were used to measure the UL-Hopfield (ULH) model in the Unsupervised Learning using Pretrained CNN and Associative Memory Bank paper on the Caltech-256, 1024 Labels dataset?
Accuracy
What metrics were used to measure the Res-CP model in the Semi-Supervised Hyperspectral Image Classification Using a Probabilistic Pseudo-Label Generation Framework paper on the Salinas dataset?
Overall Accuracy
What metrics were used to measure the MutexMatch (k=0.6C) model in the MutexMatch: Semi-Supervised Learning with Mutex-Based Consistency Regularization paper on the CIFAR-10, 80 Labels dataset?
Percentage error
What metrics were used to measure the SimCLR (CoMatch) model in the CoMatch: Semi-supervised Learning with Contrastive Graph Regularization paper on the CIFAR-10, 80 Labels dataset?
Percentage error
What metrics were used to measure the SimPLE model in the SimPLE: Similar Pseudo Label Exploitation for Semi-Supervised Classification paper on the Mini-ImageNet, 4000 Labels dataset?
Accuracy