| prompts | metrics_response |
|---|---|
What metrics were used to measure the VGG-16 model in the CINIC-10 is not ImageNet or CIFAR-10 paper on the CINIC-10 dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the InternImage-H model in the InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions paper on the Places205 dataset? | Top 1 Accuracy |
What metrics were used to measure the MixMIM-L model in the MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers paper on the Places205 dataset? | Top 1 Accuracy |
What metrics were used to measure the SEER (RegNet10B - finetuned - 384px) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the Places205 dataset? | Top 1 Accuracy |
What metrics were used to measure the MixMIM-B model in the MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers paper on the Places205 dataset? | Top 1 Accuracy |
What metrics were used to measure the MAE (ViT-H, 448) model in the Masked Autoencoders Are Scalable Vision Learners paper on the Places205 dataset? | Top 1 Accuracy |
What metrics were used to measure the SEER model in the Self-supervised Pretraining of Visual Features in the Wild paper on the Places205 dataset? | Top 1 Accuracy |
What metrics were used to measure the SAMix (ResNet-50 Supervised) model in the Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup paper on the Places205 dataset? | Top 1 Accuracy |
What metrics were used to measure the AutoMix (ResNet-50 Supervised) model in the AutoMix: Unveiling the Power of Mixup for Stronger Classifiers paper on the Places205 dataset? | Top 1 Accuracy |
What metrics were used to measure the RegNetY-128GF (Supervised) model in the Self-supervised Pretraining of Visual Features in the Wild paper on the Places205 dataset? | Top 1 Accuracy |
What metrics were used to measure the SwAV model in the Unsupervised Learning of Visual Features by Contrasting Cluster Assignments paper on the Places205 dataset? | Top 1 Accuracy |
What metrics were used to measure the Barlow Twins (ResNet-50) model in the Barlow Twins: Self-Supervised Learning via Redundancy Reduction paper on the Places205 dataset? | Top 1 Accuracy |
What metrics were used to measure the BYOL model in the Bootstrap your own latent: A new approach to self-supervised Learning paper on the Places205 dataset? | Top 1 Accuracy |
What metrics were used to measure the SimCLR model in the A Simple Framework for Contrastive Learning of Visual Representations paper on the Places205 dataset? | Top 1 Accuracy |
What metrics were used to measure the ResNet-50 (Supervised) model in the Unsupervised Learning of Visual Features by Contrasting Cluster Assignments paper on the Places205 dataset? | Top 1 Accuracy |
What metrics were used to measure the MoCo v2 model in the Improved Baselines with Momentum Contrastive Learning paper on the Places205 dataset? | Top 1 Accuracy |
What metrics were used to measure the µ2Net (ViT-L/16) model in the An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems paper on the KMNIST dataset? | Accuracy |
What metrics were used to measure the µ2Net+ (ViT-L/16) model in the A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems paper on the EuroSAT dataset? | Accuracy (%), accuracy, top-3-accuracy |
What metrics were used to measure the µ2Net (ViT-L/16) model in the An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems paper on the EuroSAT dataset? | Accuracy (%), accuracy, top-3-accuracy |
What metrics were used to measure the ResNet50 model in the In-domain representation learning for remote sensing paper on the EuroSAT dataset? | Accuracy (%), accuracy, top-3-accuracy |
What metrics were used to measure the MoCo-v2 (ResNet18, fine tune) model in the Self-supervised Learning in Remote Sensing: A Review paper on the EuroSAT dataset? | Accuracy (%), accuracy, top-3-accuracy |
What metrics were used to measure the DINO-MC (Wide ResNet) model in the DINO-MC: Self-supervised Contrastive Learning for Remote Sensing Imagery with Multi-sized Local Crops paper on the EuroSAT dataset? | Accuracy (%), accuracy, top-3-accuracy |
What metrics were used to measure the MSMatch Multispectral model in the MSMatch: Semi-Supervised Multispectral Scene Classification with Few Labels paper on the EuroSAT dataset? | Accuracy (%), accuracy, top-3-accuracy |
What metrics were used to measure the MSMatch RGB model in the MSMatch: Semi-Supervised Multispectral Scene Classification with Few Labels paper on the EuroSAT dataset? | Accuracy (%), accuracy, top-3-accuracy |
What metrics were used to measure the SEER (RegNet10B - linear eval) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the EuroSAT dataset? | Accuracy (%), accuracy, top-3-accuracy |
What metrics were used to measure the DINO-MC (WRN, linear eval) model in the DINO-MC: Self-supervised Contrastive Learning for Remote Sensing Imagery with Multi-sized Local Crops paper on the EuroSAT dataset? | Accuracy (%), accuracy, top-3-accuracy |
What metrics were used to measure the MoCo-v2 (ResNet18, linear eval) model in the Self-supervised Learning in Remote Sensing: A Review paper on the EuroSAT dataset? | Accuracy (%), accuracy, top-3-accuracy |
What metrics were used to measure the kMobileNet V3 Large 16ch model in the Grouped Pointwise Convolutions Reduce Parameters in Convolutional Neural Networks paper on the PlantDoc dataset? | PARAMS |
What metrics were used to measure the Astroformer model in the Astroformer: More Data Might not be all you need for Classification paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the DeiT-B/16-D + OCD(5) model in the OCD: Learning to Overfit with Conditional Diffusion Models paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the SwinV2-B + GradMatch model in the Data-Efficient Training of CNNs and Transformers with Coresets: A Stability Perspective paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the Swin-L model in the Vision Transformers in 2022: An Update on Tiny ImageNet paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the DeiT-B/16 (PUGD) model in the Perturbated Gradients Updating within Unit Space for Deep Learning paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the DeiT-B/16-D + OCD model in the OCD: Learning to Overfit with Conditional Diffusion Models paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the ViT-B/16 (PUGD) model in the Perturbated Gradients Updating within Unit Space for Deep Learning paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the EfficientNet-B1+DCL model in the Direction Concentration Learning: Enhancing Congruency in Machine Learning paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the WaveMixLite-144/7 model in the WaveMix: A Resource-efficient Neural Network for Image Analysis paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the Context-Aware Pipeline model in the Context-Aware Compilation of DNN Training Pipelines across Edge and Cloud paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the ResNeXt-50 (SAMix+DM) model in the Harnessing Hard Mixed Samples with Decoupled Regularizer paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the ResNeXt-50 (SAMix) model in the Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the ResNeXt-50 (AutoMix+DM) model in the Harnessing Hard Mixed Samples with Decoupled Regularizer paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the ResNeXt-50 (AutoMix) model in the AutoMix: Unveiling the Power of Mixup for Stronger Classifiers paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the PreActResNet-18-3 model in the MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the ResNet18 (SAMix) model in the Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the ResNeXt-50 (PuzzleMix+DM) model in the Harnessing Hard Mixed Samples with Decoupled Regularizer paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the UPANets model in the UPANets: Learning from the Universal Pixel Attention Networks paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the ResNet18 (AutoMix) model in the AutoMix: Unveiling the Power of Mixup for Stronger Classifiers paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the DenseNet + Residual Networks model in the DenseNet Models for Tiny ImageNet Classification paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the WaveMixLite-160/13 model in the WaveMix-Lite: A Resource-efficient Neural Network for Image Analysis paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the Convolutional Nystromformer for Vision (CNV) model in the Convolutional Xformers for Vision paper on the Tiny ImageNet Classification dataset? | Validation Acc |
What metrics were used to measure the PDO-eConv (ours) model in the PDO-eConvs: Partial Differential Operator Based Equivariant Convolutions paper on the MNIST-rot-12 dataset? | Test Error |
What metrics were used to measure the DL+PCA+GWO model in the Cervical Cytology Classification Using PCA & GWO Enhanced Deep Features Selection paper on the SIPaKMeD dataset? | Accuracy |
What metrics were used to measure the Fuzzy Distance Ensemble model in the A fuzzy distance-based ensemble of deep models for cervical cancer detection paper on the SIPaKMeD dataset? | Accuracy |
What metrics were used to measure the A Fuzzy Rank-based Ensemble of CNN Models model in the A Fuzzy Rank-based Ensemble of CNN Models for Classification of Cervical Cytology paper on the SIPaKMeD dataset? | Accuracy |
What metrics were used to measure the µ2Net+ (ViT-L/16) model in the A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems paper on the Cats and Dogs dataset? | Accuracy |
What metrics were used to measure the STS-ResNet model in the Convolutional Spiking Neural Networks for Spatio-Temporal Feature Extraction paper on the N-MNIST dataset? | Accuracy |
What metrics were used to measure the SNN model in the Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data paper on the N-MNIST dataset? | Accuracy |
What metrics were used to measure the FastSNN model in the Robust and accelerated single-spike spiking neural network training with applicability to challenging temporal tasks paper on the N-MNIST dataset? | Accuracy |
What metrics were used to measure the Sparse Spiking Gradient Descent model in the Sparse Spiking Gradient Descent paper on the N-MNIST dataset? | Accuracy |
What metrics were used to measure the efficient adaptive ensembling model in the Efficient Adaptive Ensembling for Image Classification paper on the Oxford-IIIT Pets dataset? | Accuracy, Per-Class Accuracy |
What metrics were used to measure the CeiT-S (384 finetune resolution) model in the Incorporating Convolution Designs into Visual Transformers paper on the Oxford-IIIT Pets dataset? | Accuracy, Per-Class Accuracy |
What metrics were used to measure the CvT-W24 model in the CvT: Introducing Convolutions to Vision Transformers paper on the Oxford-IIIT Pets dataset? | Accuracy, Per-Class Accuracy |
What metrics were used to measure the CeiT-S model in the Incorporating Convolution Designs into Visual Transformers paper on the Oxford-IIIT Pets dataset? | Accuracy, Per-Class Accuracy |
What metrics were used to measure the CeiT-T (384 finetune resolution) model in the Incorporating Convolution Designs into Visual Transformers paper on the Oxford-IIIT Pets dataset? | Accuracy, Per-Class Accuracy |
What metrics were used to measure the CeiT-T model in the Incorporating Convolution Designs into Visual Transformers paper on the Oxford-IIIT Pets dataset? | Accuracy, Per-Class Accuracy |
What metrics were used to measure the Diffusion Classifier (zero-shot) model in the Your Diffusion Model is Secretly a Zero-Shot Classifier paper on the Oxford-IIIT Pets dataset? | Accuracy, Per-Class Accuracy |
What metrics were used to measure the Inceptionv4 model in the Non-binary deep transfer learning for image classification paper on the Caltech-256 dataset? | Accuracy |
What metrics were used to measure the swin-transformer model in the paper on the Caltech-256 dataset? | Accuracy |
What metrics were used to measure the Inceptionv4 (random initialization) model in the Non-binary deep transfer learning for image classification paper on the Caltech-256 dataset? | Accuracy |
What metrics were used to measure the WaveMixLite-256/7 model in the WaveMix: A Resource-efficient Neural Network for Image Analysis paper on the Caltech-256 dataset? | Accuracy |
What metrics were used to measure the CoCa model in the CoCa: Contrastive Captioners are Image-Text Foundation Models paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the LiT model in the LiT: Zero-Shot Transfer with Locked-image text Tuning paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the BASIC model in the Combined Scaling for Zero-shot Transfer Learning paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the EVA-02-CLIP-E/14+ model in the EVA-CLIP: Improved Training Techniques for CLIP at Scale paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the Baseline (ViT-G/14) model in the Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the Model soups (ViT-G/14) model in the Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the MAWS (ViT-2B) model in the The effectiveness of MAE pre-pretraining for billion-scale pretraining paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the CLIP model in the Learning Transferable Visual Models From Natural Language Supervision paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the ALIGN model in the Combined Scaling for Zero-shot Transfer Learning paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the WiSE-FT model in the Robust fine-tuning of zero-shot models paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the ViT-e model in the PaLI: A Jointly-Scaled Multilingual Language-Image Model paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the ViT-G/14 model in the Scaling Vision Transformers paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the SWAG (ViT H/14) model in the Revisiting Weakly Supervised Pre-Training of Visual Perception Models paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the NS (Eff.-L2) model in the Scaling Vision Transformers paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the RegNetY 128GF (Platt) model in the Revisiting Weakly Supervised Pre-Training of Visual Perception Models paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the ViT-H/14 model in the An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the LLE (ViT-H/14, MAE, Edge Aug) model in the A Whac-A-Mole Dilemma: Shortcuts Come in Multiples Where Mitigating One Amplifies Others paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the SEER (RegNet10B) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the ViT H/14 (Platt) model in the Revisiting Weakly Supervised Pre-Training of Visual Perception Models paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the BiT-L (ResNet-152x4) model in the Big Transfer (BiT): General Visual Representation Learning paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the ViT L/16 (Platt) model in the Revisiting Weakly Supervised Pre-Training of Visual Perception Models paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the Vit B/16 (Bamboo) model in the Bamboo: Building Mega-Scale Vision Dataset Continually with Human-Machine Synergy paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the AR-L (Opt Relevance) model in the Optimizing Relevance Maps of Vision Transformers Improves Robustness paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the ALIGN-MRL model in the Matryoshka Representation Learning paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the ViT-B/16 (ANN-1.3B) model in the Billion-Scale Pretraining with Vision Transformers for Multi-Task Visual Representations paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the ViT-B/16 (512x512) + Pyramid model in the Pyramid Adversarial Training Improves ViT Performance paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the ResNet-101 (JFT-300M) model in the Billion-Scale Pretraining with Vision Transformers for Multi-Task Visual Representations paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the ViT B/16 model in the Revisiting Weakly Supervised Pre-Training of Visual Perception Models paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the ViT-B/32 model in the Billion-Scale Pretraining with Vision Transformers for Multi-Task Visual Representations paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the ViT-B/16 (512x512) + Pixel model in the Pyramid Adversarial Training Improves ViT Performance paper on the ObjectNet dataset? | Top-1 Accuracy, Top-5 Accuracy |
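Every prompt above follows the same fixed template ("What metrics were used to measure the \<model\> model in the \<paper\> paper on the \<dataset\> dataset?"), so the rows can be split back into structured fields. The sketch below is illustrative, assuming each row is available as a plain string; `ROW_RE` and `parse_row` are hypothetical names, not part of the dataset.

```python
import re

# Template of each table row:
# "What metrics were used to measure the <model> model in the <paper> paper
#  on the <dataset> dataset? | <metrics> |"
ROW_RE = re.compile(
    r"What metrics were used to measure the (?P<model>.+) model in the "
    r"(?P<paper>.*?) paper on the (?P<dataset>.+) dataset\?"
    r"\s*\|\s*(?P<metrics>.*?)\s*\|?$"
)

def parse_row(row: str) -> dict:
    """Split one table row into model, paper, dataset, and a metric list."""
    m = ROW_RE.match(row.strip())
    if m is None:
        raise ValueError(f"row does not match template: {row!r}")
    fields = m.groupdict()
    # metrics_response is a comma-separated list, e.g. "Accuracy, FLOPS, PARAMS"
    fields["metrics"] = [s.strip() for s in fields["metrics"].split(",") if s.strip()]
    return fields
```

Note the non-greedy `paper` group: paper titles here can themselves contain words like "dataset" or dashes, so the regex anchors on the literal " paper on the " separator that the template guarantees.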