prompts | metrics_response |
|---|---|
What metrics were used to measure the SimMatch model in the SimMatch: Semi-supervised Learning with Similarity Matching paper on the CIFAR-100, 2500 Labels dataset? | Percentage error |
What metrics were used to measure the ShrinkMatch model in the Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning paper on the CIFAR-100, 2500 Labels dataset? | Percentage error |
What metrics were used to measure the FixMatch+DM model in the Harnessing Hard Mixed Samples with Decoupled Regularizer paper on the CIFAR-100, 2500 Labels dataset? | Percentage error |
What metrics were used to measure the NP-Match model in the NP-Match: When Neural Processes meet Semi-Supervised Learning paper on the CIFAR-100, 2500 Labels dataset? | Percentage error |
What metrics were used to measure the FreeMatch model in the FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning paper on the CIFAR-100, 2500 Labels dataset? | Percentage error |
What metrics were used to measure the FlexMatch model in the FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling paper on the CIFAR-100, 2500 Labels dataset? | Percentage error |
What metrics were used to measure the LiDAM model in the LiDAM: Semi-Supervised Learning with Localized Domain Adaptation and Iterative Matching paper on the CIFAR-100, 2500 Labels dataset? | Percentage error |
What metrics were used to measure the DoubleMatch model in the DoubleMatch: Improving Semi-Supervised Learning with Self-Supervision paper on the CIFAR-100, 2500 Labels dataset? | Percentage error |
What metrics were used to measure the Dash (RA, WRN-28-8) model in the Dash: Semi-Supervised Learning with Dynamic Thresholding paper on the CIFAR-100, 2500 Labels dataset? | Percentage error |
What metrics were used to measure the ReMixMatch model in the ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring paper on the CIFAR-100, 2500 Labels dataset? | Percentage error |
What metrics were used to measure the FixMatch+CR model in the Contrastive Regularization for Semi-Supervised Learning paper on the CIFAR-100, 2500 Labels dataset? | Percentage error |
What metrics were used to measure the FixMatch (CTA, WRN-28-8) model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the CIFAR-100, 2500 Labels dataset? | Percentage error |
What metrics were used to measure the MixMatch model in the MixMatch: A Holistic Approach to Semi-Supervised Learning paper on the CIFAR-10, 500 Labels dataset? | Accuracy |
What metrics were used to measure the SimCLR-kmediods-PAWS model in the Cold PAWS: Unsupervised class discovery and addressing the cold-start problem for semi-supervised learning paper on the Imagenette, 100 Labels dataset? | Percentage error |
What metrics were used to measure the MixMatch model in the MixMatch: A Holistic Approach to Semi-Supervised Learning paper on the SVHN, 4000 Labels dataset? | Accuracy |
What metrics were used to measure the EnAET model in the EnAET: A Self-Trained framework for Semi-Supervised and Supervised Learning with Ensemble Transformations paper on the CIFAR-100, 1000 Labels dataset? | Percentage correct |
What metrics were used to measure the LiDAM model in the LiDAM: Semi-Supervised Learning with Localized Domain Adaptation and Iterative Matching paper on the CIFAR-100, 5000 Labels dataset? | Percentage correct |
What metrics were used to measure the EnAET model in the EnAET: A Self-Trained framework for Semi-Supervised and Supervised Learning with Ensemble Transformations paper on the CIFAR-100, 5000 Labels dataset? | Percentage correct |
What metrics were used to measure the Meta Pseudo Labels (WRN-28-2) model in the Meta Pseudo Labels paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the DoubleMatch model in the DoubleMatch: Improving Semi-Supervised Learning with Self-Supervision paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the FixMatch (CTA) model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the EnAET model in the EnAET: A Self-Trained framework for Semi-Supervised and Supervised Learning with Ensemble Transformations paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the UDA model in the Unsupervised Data Augmentation for Consistency Training paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the ReMixMatch model in the ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the MixMatch model in the MixMatch: A Holistic Approach to Semi-Supervised Learning paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the Triple-GAN-V2 (CNN-13) model in the Triple Generative Adversarial Networks paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the ICT (WRN-28-2) model in the Interpolation Consistency Training for Semi-Supervised Learning paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the R2-D2 (CNN-13) model in the Repetitive Reprediction Deep Decipher for Semi-Supervised Learning paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the VAT+EntMin model in the Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the FCE model in the Flow Contrastive Estimation of Energy-Based Models paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the ICT model in the Interpolation Consistency Training for Semi-Supervised Learning paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the Mean Teacher model in the Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the Triple-GAN-V2 (CNN-13, no aug) model in the Triple Generative Adversarial Networks paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the Pi Model model in the Temporal Ensembling for Semi-Supervised Learning paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the VAT model in the Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the SESEMI SSL (ConvNet) model in the Exploring Self-Supervised Regularization for Supervised and Semi-Supervised Learning paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the GAN model in the Improved Techniques for Training GANs paper on the SVHN, 1000 labels dataset? | Accuracy |
What metrics were used to measure the SemCo (μ=7) model in the All Labels Are Not Created Equal: Enhancing Semi-supervision via Label Grouping and Co-training paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the Meta Pseudo Labels (WRN-28-2) model in the Meta Pseudo Labels paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the SimMatch model in the SimMatch: Semi-supervised Learning with Similarity Matching paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the PAWS-NN (WRN-28-2) model in the Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the SelfMatch model in the SelfMatch: Combining Contrastive Self-Supervision and Consistency for Semi-Supervised Learning paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the Dash (RA, ours) model in the Dash: Semi-Supervised Learning with Dynamic Thresholding paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the NP-Match model in the NP-Match: When Neural Processes meet Semi-Supervised Learning paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the FixMatch+DM model in the Harnessing Hard Mixed Samples with Decoupled Regularizer paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the FixMatch+CR model in the Contrastive Regularization for Semi-Supervised Learning paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the EnAET model in the EnAET: A Self-Trained framework for Semi-Supervised and Supervised Learning with Ensemble Transformations paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the FlexMatch model in the FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the DP-SSL model in the DP-SSL: Towards Robust Semi-supervised Learning with A Few Labeled Samples paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the UPS (wrn-28-2) model in the NP-Match: When Neural Processes meet Semi-Supervised Learning paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the FixMatch (CTA) model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the LaplaceNet (WRN-28-2) model in the LaplaceNet: A Hybrid Graph-Energy Neural Network for Deep Semi-Supervised Classification paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the DoubleMatch model in the DoubleMatch: Improving Semi-Supervised Learning with Self-Supervision paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the UPS (Shake-Shake) model in the In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the LaplaceNet (CNN-13) model in the LaplaceNet: A Hybrid Graph-Energy Neural Network for Deep Semi-Supervised Classification paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the SWSA model in the There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the ReMixMatch model in the ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the UDA model in the Unsupervised Data Augmentation for Consistency Training paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the R2-D2 (Shake-Shake) model in the Repetitive Reprediction Deep Decipher for Semi-Supervised Learning paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the DMT (WRN-28-2) model in the DMT: Dynamic Mutual Training for Semi-Supervised Learning paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the Adaboost model in the Adaptive Boosting for Domain Adaptation: Towards Robust Predictions in Scene Segmentation paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the SHOT-VAE model in the SHOT-VAE: Semi-supervised Deep Generative Models With Label-aware ELBO Approximations paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the MixMatch model in the MixMatch: A Holistic Approach to Semi-Supervised Learning paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the Mean Teacher model in the Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the RealMix model in the RealMix: Towards Realistic Semi-Supervised Deep Learning Algorithms paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the UPS (CNN-13) model in the In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the Triple-GAN-V2 (ResNet-26) model in the Triple Generative Adversarial Networks paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the ICT (CNN-13) model in the Interpolation Consistency Training for Semi-Supervised Learning paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the LiDAM model in the LiDAM: Semi-Supervised Learning with Localized Domain Adaptation and Iterative Matching paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the ICT (WRN-28-2) model in the Interpolation Consistency Training for Semi-Supervised Learning paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the ADA-Net (ConvNet) model in the Semi-Supervised Learning by Augmented Distribution Alignment paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the Dual Student (600) model in the Dual Student: Breaking the Limits of the Teacher in Semi-supervised Learning paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the Triple-GAN-V2 (CNN-13) model in the Triple Generative Adversarial Networks paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the VAT+EntMin model in the Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the VAT model in the Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the SESEMI SSL (ConvNet) model in the Exploring Self-Supervised Regularization for Supervised and Semi-Supervised Learning paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the Pi Model model in the Temporal Ensembling for Semi-Supervised Learning paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the Triple-GAN-V2 (CNN-13, no aug) model in the Triple Generative Adversarial Networks paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the Bad GAN model in the Good Semi-supervised Learning that Requires a Bad GAN paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the GAN model in the Improved Techniques for Training GANs paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the Γ-model model in the Semi-Supervised Learning with Ladder Networks paper on the CIFAR-10, 4000 Labels dataset? | Percentage error |
What metrics were used to measure the EnAET model in the EnAET: A Self-Trained framework for Semi-Supervised and Supervised Learning with Ensemble Transformations paper on the STL-10 dataset? | Accuracy |
What metrics were used to measure the IIC model in the Invariant Information Clustering for Unsupervised Image Classification and Segmentation paper on the STL-10 dataset? | Accuracy |
What metrics were used to measure the CutOut model in the Improved Regularization of Convolutional Neural Networks with Cutout paper on the STL-10 dataset? | Accuracy |
What metrics were used to measure the REACT (ViT-Large) model in the Learning Customized Visual Models with Retrieval-Augmented Knowledge paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Semi-ViT (ViT-Huge) model in the Semi-supervised Vision Transformers at Scale paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Semi-ViT (ViT-Large) model in the Semi-supervised Vision Transformers at Scale paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimCLRv2 self-distilled (ResNet-152 x3, SK) model in the Big Self-Supervised Models are Strong Semi-Supervised Learners paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimCLRv2 distilled (ResNet-50 x2, SK) model in the Big Self-Supervised Models are Strong Semi-Supervised Learners paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimCLRv2 (ResNet-152 x3, SK) model in the Big Self-Supervised Models are Strong Semi-Supervised Learners paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Semi-ViT (ViT-Base) model in the Semi-supervised Vision Transformers at Scale paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the PAWS (ResNet-50 4x) model in the Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SEER (RegNet10B) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SEER Large (RegNetY-256GF) model in the Self-supervised Pretraining of Visual Features in the Wild paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the PAWS (ResNet-50 2x) model in the Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimCLRv2 distilled (ResNet-50) model in the Big Self-Supervised Models are Strong Semi-Supervised Learners paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Semi-ViT (ViT-Small) model in the Semi-supervised Vision Transformers at Scale paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SEER Small (RegNetY-128GF) model in the Self-supervised Pretraining of Visual Features in the Wild paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimMatchV2 (ResNet-50) model in the SimMatchV2: Semi-Supervised Learning with Graph Consistency paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Semiformer (ViT-S + Conv) model in the Semi-Supervised Vision Transformers paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
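
For reference, here is a minimal sketch of how a prompt/response table like this could be loaded and inspected with the Hugging Face `datasets` library. The repository ID below is a hypothetical placeholder (substitute the actual dataset ID on the Hub); the column names `prompts` and `metrics_response` are taken from the header above.

```python
from datasets import load_dataset

# Hypothetical repository ID -- replace with the real dataset name on the Hub.
dataset = load_dataset("your-username/sota-metrics-qa", split="train")

# Each row pairs a natural-language question about a model/paper/benchmark
# combination with the metric(s) reported for that leaderboard entry.
for row in dataset.select(range(3)):
    print(row["prompts"], "->", row["metrics_response"])
```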