| prompts | metrics_response |
|---|---|
What metrics were used to measure the PAWS (ResNet-50) model in the Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the TWIST (ResNet-50 x2) model in the Self-Supervised Learning by Estimating Twin Class Distributions paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimMatch (ResNet-50) model in the SimMatch: Semi-supervised Learning with Similarity Matching paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the FixMatch-EMAN model in the Exponential Moving Average Normalization for Self-supervised and Semi-supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the CowMix (ResNet-152) model in the Milking CowMask for Semi-Supervised Image Classification paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimCLRv2 (ResNet-50 x2) model in the Big Self-Supervised Models are Strong Semi-Supervised Learners paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Meta Pseudo Labels (ResNet-50) model in the Meta Pseudo Labels paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the CoMatch (w. MoCo v2) model in the CoMatch: Semi-supervised Learning with Contrastive Graph Regularization paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the S4L-MOAM (ResNet-50 4×) model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the CPC v2 (ResNet-161) model in the Data-Efficient Image Recognition with Contrastive Predictive Coding paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the RELICv2 (ResNet-50) model in the Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet? paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the WCL (ResNet-50) model in the Weakly Supervised Contrastive Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the NNCLR (ResNet-50) model in the With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Barlow Twins (ResNet-50) model in the Barlow Twins: Self-Supervised Learning via Redundancy Reduction paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the I-VNE+ (ResNet-50) model in the VNE: An Effective Method for Improving Deep Representation by Manipulating Eigenvalue Distribution paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimCLRv2 (ResNet-50) model in the Big Self-Supervised Models are Strong Semi-Supervised Learners paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the FlexMatch model in the FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Dual Student model in the Dual Student: Breaking the Limits of the Teacher in Semi-supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the NP-Match (ResNet-50) model in the NP-Match: When Neural Processes meet Semi-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimCLR (ResNet-50 4×) model in the A Simple Framework for Contrastive Learning of Visual Representations paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Rotation + VAT + Ent. Min. model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimCLR (ResNet-50 2×) model in the A Simple Framework for Contrastive Learning of Visual Representations paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Mean Teacher (ResNeXt-152) model in the Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the OBoW (ResNet-50) model in the OBoW: Online Bag-of-Visual-Words Generation for Self-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the R2-D2 (ResNet-18) model in the Repetitive Reprediction Deep Decipher for Semi-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the FixMatch model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the UDA model in the Unsupervised Data Augmentation for Consistency Training paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimCLR (ResNet-50) model in the A Simple Framework for Contrastive Learning of Visual Representations paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the CPC model in the Representation Learning with Contrastive Predictive Coding paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the S4L-Rotation (ResNet-50) model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Rotation (joint training) model in the Semi-supervised Sequence-to-sequence ASR using Unpaired Speech and Text paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the PIRL (ResNet-50) model in the Self-Supervised Learning of Pretext-Invariant Representations paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the S4L-Exemplar (ResNet-50) model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Exemplar (joint training) model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the VAT + Entropy Minimization (ResNet-50) model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the VAT + Entropy Minimization model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the VAT (ResNet-50) model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the VAT model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Pseudolabeling (ResNet-50) model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Pseudolabeling model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Exemplar Fine-tuned (ResNet-50) model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Exemplar model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the BigBiGAN (RevNet-50 ×4, BN+CReLU) model in the Large Scale Adversarial Representation Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Rotation Fine-tuned (ResNet-50) model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Rotation model in the S4L: Self-Supervised Semi-Supervised Learning paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Instance Discrimination model in the Unsupervised Feature Learning via Non-Parametric Instance Discrimination paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the InstDisc (ResNet-50) model in the Unsupervised Feature Learning via Non-Parametric Instance Discrimination paper on the ImageNet - 10% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the MixMatch model in the MixMatch: A Holistic Approach to Semi-Supervised Learning paper on the CIFAR-10, 2000 Labels dataset? | Accuracy |
What metrics were used to measure the ICT (CNN-13) model in the Interpolation Consistency Training for Semi-Supervised Learning paper on the CIFAR-10, 2000 Labels dataset? | Accuracy |
What metrics were used to measure the Dual Student (600) model in the Dual Student: Breaking the Limits of the Teacher in Semi-supervised Learning paper on the CIFAR-10, 2000 Labels dataset? | Accuracy |
What metrics were used to measure the SESEMI SSL (ConvNet) model in the Exploring Self-Supervised Regularization for Supervised and Semi-Supervised Learning paper on the CIFAR-10, 2000 Labels dataset? | Accuracy |
What metrics were used to measure the MutexMatch (k=0.6C) model in the MutexMatch: Semi-Supervised Learning with Mutex-Based Consistency Regularization paper on the CIFAR-10, 20 Labels dataset? | Percentage error |
What metrics were used to measure the CoMatch (SimCLR) model in the CoMatch: Semi-supervised Learning with Contrastive Graph Regularization paper on the CIFAR-10, 20 Labels dataset? | Percentage error |
What metrics were used to measure the Semi-MMDC model in the Boosting the Performance of Semi-Supervised Learning with Unsupervised Clustering paper on the CIFAR-10, 20 Labels dataset? | Percentage error |
What metrics were used to measure the SimCLR-kmediods-finetuned model in the Cold PAWS: Unsupervised class discovery and addressing the cold-start problem for semi-supervised learning paper on the DeepWeeds, 99 Labels dataset? | Percentage error |
What metrics were used to measure the MutexMatch (k=0.6C) model in the MutexMatch: Semi-Supervised Learning with Mutex-Based Consistency Regularization paper on the CIFAR-100, 200 Labels dataset? | Percentage error |
What metrics were used to measure the ReMixMatch model in the ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring paper on the cifar10, 250 Labels dataset? | Percentage correct |
What metrics were used to measure the EnAET model in the EnAET: A Self-Trained framework for Semi-Supervised and Supervised Learning with Ensemble Transformations paper on the cifar10, 250 Labels dataset? | Percentage correct |
What metrics were used to measure the RealMix model in the RealMix: Towards Realistic Semi-Supervised Deep Learning Algorithms paper on the cifar10, 250 Labels dataset? | Percentage correct |
What metrics were used to measure the VAT model in the Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning paper on the cifar10, 250 Labels dataset? | Percentage correct |
What metrics were used to measure the REACT (ViT-Large) model in the Learning Customized Visual Models with Retrieval-Augmented Knowledge paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Semi-ViT (ViT-Huge) model in the Semi-supervised Vision Transformers at Scale paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Semi-ViT (ViT-Large) model in the Semi-supervised Vision Transformers at Scale paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimCLRv2 self-distilled (ResNet-152 x3, SK) model in the Big Self-Supervised Models are Strong Semi-Supervised Learners paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimCLRv2 distilled (ResNet-50 x2, SK) model in the Big Self-Supervised Models are Strong Semi-Supervised Learners paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the MSN (ViT-B/4) model in the Masked Siamese Networks for Label-Efficient Learning paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimCLRv2 (ResNet-152 x3, SK) model in the Big Self-Supervised Models are Strong Semi-Supervised Learners paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimCLRv2 distilled (ResNet-50) model in the Big Self-Supervised Models are Strong Semi-Supervised Learners paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimMatchV2 (ResNet-50) model in the SimMatchV2: Semi-Supervised Learning with Graph Consistency paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the DebiasPL (ResNet-50) model in the Debiased Learning from Naturally Imbalanced Pseudo-Labels paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the BYOL (ResNet-200 x2) model in the Bootstrap your own latent: A new approach to self-supervised Learning paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Semi-ViT (ViT-Base) model in the Semi-supervised Vision Transformers at Scale paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the PAWS (ResNet-50 4x) model in the Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the PAWS (ResNet-50 2x) model in the Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the BYOL (ResNet-50 x4) model in the Bootstrap your own latent: A new approach to self-supervised Learning paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the TWIST (ResNet-50 x2) model in the Self-Supervised Learning by Estimating Twin Class Distributions paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimMatch (ResNet-50) model in the SimMatch: Semi-supervised Learning with Similarity Matching paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the CoMatch (w. MoCo v2) model in the CoMatch: Semi-supervised Learning with Contrastive Graph Regularization paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the PAWS (ResNet-50) model in the Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimCLRv2 (ResNet-50 ×2) model in the Big Self-Supervised Models are Strong Semi-Supervised Learners paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the WCL (ResNet-50) model in the Weakly Supervised Contrastive Learning paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the FNC (ResNet-50) model in the Boosting Contrastive Self-Supervised Learning with False Negative Cancellation paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimCLR (ResNet-50 4×) model in the A Simple Framework for Contrastive Learning of Visual Representations paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the FixMatch-EMAN model in the Exponential Moving Average Normalization for Self-supervised and Semi-supervised Learning paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SEER (RegNet10B) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the BYOL (ResNet-50 x2) model in the Bootstrap your own latent: A new approach to self-supervised Learning paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the iBOT (ViT-S/16) model in the iBOT: Image BERT Pre-Training with Online Tokenizer paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SEER Large (RegNetY-256GF) model in the Self-supervised Pretraining of Visual Features in the Wild paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SemiReward model in the SemiReward: A General Reward Model for Semi-supervised Learning paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimCLR (ResNet-50 2×) model in the A Simple Framework for Contrastive Learning of Visual Representations paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the RELICv2 model in the Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet? paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SimCLRv2 (ResNet-50) model in the Big Self-Supervised Models are Strong Semi-Supervised Learners paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SEER Small (RegNetY-128GF) model in the Self-supervised Pretraining of Visual Features in the Wild paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the NNCLR (ResNet-50) model in the With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the I-VNE+ (ResNet-50) model in the VNE: An Effective Method for Improving Deep Representation by Manipulating Eigenvalue Distribution paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the Barlow Twins (ResNet-50) model in the Barlow Twins: Self-Supervised Learning via Redundancy Reduction paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the VICREG (Resnet-50) model in the VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SwAV (ResNet-50) model in the Unsupervised Learning of Visual Features by Contrasting Cluster Assignments paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the BYOL (ResNet-50) model in the Bootstrap your own latent: A new approach to self-supervised Learning paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the CPC v2 (ResNet-161) model in the Data-Efficient Image Recognition with Contrastive Predictive Coding paper on the ImageNet - 1% labeled data dataset? | Top 1 Accuracy, Top 5 Accuracy |