prompts (string, length 81–413) | metrics_response (string, length 0–371) |
|---|---|
What metrics were used to measure the TResNet-L-V2 model in the ImageNet-21K Pretraining for the Masses paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the EfficientNetV2-L model in the EfficientNetV2: Smaller Models and Faster Training paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the EfficientNetV2-M model in the EfficientNetV2: Smaller Models and Faster Training paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the CaiT-M-36 U 224 model in the Going deeper with Image Transformers paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the ImageNet + iNat on WS-DAN model in the Domain Adaptive Transfer Learning on Visual Attention Aware Data Augmentation for Fine-grained Visual Categorization paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the CeiT-S (384 finetune resolution) model in the Incorporating Convolution Designs into Visual Transformers paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the EfficientNetV2-S model in the EfficientNetV2: Smaller Models and Faster Training paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the CeiT-S model in the Incorporating Convolution Designs into Visual Transformers paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the GFNet-H-B model in the Global Filter Networks for Image Classification paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the CeiT-T (384 finetune resolution) model in the Incorporating Convolution Designs into Visual Transformers paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the TransBoost-ResNet50 model in the TransBoost: Improving the Best ImageNet Performance using Deep Transduction paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the CeiT-T model in the Incorporating Convolution Designs into Visual Transformers paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the LeViT-192 model in the LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the ResMLP-24 model in the ResMLP: Feedforward networks for image classification with data-efficient training paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the LeViT-384 model in the LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the LeViT-128 model in the LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the LeViT-128S model in the LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the LeViT-256 model in the LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the ResMLP-12 model in the ResMLP: Feedforward networks for image classification with data-efficient training paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the NNCLR model in the With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations paper on the Stanford Cars dataset? | Accuracy |
What metrics were used to measure the EnGraf-Net101 (G=4, H=1) model in the EnGraf-Net: Multiple Granularity Branch Network with Fine-Coarse Graft Grained for Classification Task paper on the FGVC-Aircraft dataset? | Accuracy |
What metrics were used to measure the EnGraf-Net152 (G=4, H=1) model in the EnGraf-Net: Multiple Granularity Branch Network with Fine-Coarse Graft Grained for Classification Task paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the EfficientNet-B3 model in the The MAMe Dataset: On the relevance of High Resolution and Variable Shape image properties paper on the MAMe dataset? | Acc |
What metrics were used to measure the EfficientNet-B0 model in the The MAMe Dataset: On the relevance of High Resolution and Variable Shape image properties paper on the MAMe dataset? | Acc |
What metrics were used to measure the Resnet18 model in the The MAMe Dataset: On the relevance of High Resolution and Variable Shape image properties paper on the MAMe dataset? | Acc |
What metrics were used to measure the VGG11 model in the The MAMe Dataset: On the relevance of High Resolution and Variable Shape image properties paper on the MAMe dataset? | Acc |
What metrics were used to measure the Multi-task model in the A New Periocular Dataset Collected by Mobile Devices in Unconstrained Scenarios paper on the Imbalanced CUB-200-2011 dataset? | Accuracy, Average Per-Class Accuracy |
What metrics were used to measure the PC-Softmax model in the Rethinking Softmax with Cross-Entropy: Neural Network Classifier as Mutual Information Estimator paper on the Imbalanced CUB-200-2011 dataset? | Accuracy, Average Per-Class Accuracy |
What metrics were used to measure the EVA (EVA-CLIP, 336) model in the EVA: Exploring the Limits of Masked Visual Representation Learning at Scale paper on the ImageNet (finetuned) dataset? | Number of Params |
What metrics were used to measure the TransBoost-ResNet50 model in the TransBoost: Improving the Best ImageNet Performance using Deep Transduction paper on the SUN397 dataset? | Accuracy |
What metrics were used to measure the CapsNet model in the Dynamic Routing Between Capsules paper on the MultiMNIST dataset? | Percentage error |
What metrics were used to measure the TC-VII (with outside data) model in the Deep Learning for Logo Recognition paper on the FlickrLogos-32 dataset? | Accuracy |
What metrics were used to measure the TC-VII (without outside data) model in the Deep Learning for Logo Recognition paper on the FlickrLogos-32 dataset? | Accuracy |
What metrics were used to measure the DeepLogo (GoogLeNet-GP) model in the DeepLogo: Hitting Logo Recognition with the Deep Neural Network Hammer paper on the FlickrLogos-32 dataset? | Accuracy |
What metrics were used to measure the cFlow model in the Null-sampling for Interpretable and Fair Representations paper on the CelebA 64x64 dataset? | Accuracy |
What metrics were used to measure the cVAE model in the Null-sampling for Interpretable and Fair Representations paper on the CelebA 64x64 dataset? | Accuracy |
What metrics were used to measure the CNN model in the Null-sampling for Interpretable and Fair Representations paper on the CelebA 64x64 dataset? | Accuracy |
What metrics were used to measure the VIT-L/16 (Background, Spinal FC) model in the Reduction of Class Activation Uncertainty with Background Information paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the µ2Net+ (ViT-L/16) model in the A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the kNN-CLIP model in the Revisiting a kNN-based Image Classification System with High-capacity Storage paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the Wide-ResNet-101 (Spinal FC) model in the SpinalNet: Deep Neural Network with Gradual Input paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the WideResNet model in the Reduction of Class Activation Uncertainty with Background Information paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the CN(d=128) model in the Toward Understanding Supervised Representation Learning with RKHS and GAN paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the CN(d=64) model in the Toward Understanding Supervised Representation Learning with RKHS and GAN paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the NSRL+CN(d=128) model in the Toward Understanding Supervised Representation Learning with RKHS and GAN paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the NSRL+CN(d=32) model in the Toward Understanding Supervised Representation Learning with RKHS and GAN paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the NSRL+CN(d=64) model in the Toward Understanding Supervised Representation Learning with RKHS and GAN paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the CN(d=32) model in the Toward Understanding Supervised Representation Learning with RKHS and GAN paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the NAT-M4 model in the Neural Architecture Transfer paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the NAT-M3 model in the Neural Architecture Transfer paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the SEER (RegNet10B) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the NAT-M2 model in the Neural Architecture Transfer paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the iGPT-L model in the Generative Pretraining from Pixels paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the NAT-M1 model in the Neural Architecture Transfer paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the EnAET model in the EnAET: A Self-Trained framework for Semi-Supervised and Supervised Learning with Ensemble Transformations paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the VGG-19bn model in the SpinalNet: Deep Neural Network with Gradual Input paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the Diffusion Classifier (zero-shot) model in the Your Diffusion Model is Secretly a Zero-Shot Classifier paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the FixMatch (CTA) model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the ReMixMatch model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the AMDIM model in the Learning Representations by Maximizing Mutual Information Across Views paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the MixMatch model in the MixMatch: A Holistic Approach to Semi-Supervised Learning paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the AMDIM-L model in the Generative Pretraining from Pixels paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the ReMixMatch (K=4) model in the ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the AMDIM model in the A Framework For Contrastive Self-Supervised Learning And Designing A New Approach paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the ReMixMatch (K=1) model in the ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the MP* model in the Increasing Trustworthiness of Deep Neural Networks via Accuracy Monitoring paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the UDA model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the YADIM model in the A Framework For Contrastive Self-Supervised Learning And Designing A New Approach paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the FixMatch (RA) model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the NSGANetV2 model in the NSGANetV2: Evolutionary Multi-Objective Surrogate-Assisted Neural Architecture Search paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the SESN model in the Scale-Equivariant Steerable Networks paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the Harmonic WRN-16-8 model in the Harmonic Networks with Limited Training Samples paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the wrn16/8 D8 D4 D1 model in the General $E(2)$-Equivariant Steerable CNNs paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the DLME (ResNet-50, linear) model in the DLME: Deep Local-flatness Manifold Embedding paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the MixMatch model in the ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the MixMatch model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the wrn16/8* D8 D4 D1 model in the General $E(2)$-Equivariant Steerable CNNs paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the wrn16/8* D1 D1 D1 model in the General $E(2)$-Equivariant Steerable CNNs paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the wrn16/8 D1 D1 D1 model in the General $E(2)$-Equivariant Steerable CNNs paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the IIC model in the Invariant Information Clustering for Unsupervised Image Classification and Segmentation paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the IIC model in the MixMatch: A Holistic Approach to Semi-Supervised Learning paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the SOPCNN model in the Stochastic Optimization of Plain Convolutional Neural Networks with Simple methods paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the TS model in the Increasing Trustworthiness of Deep Neural Networks via Accuracy Monitoring paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the CutOut model in the MixMatch: A Holistic Approach to Semi-Supervised Learning paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the Cutout model in the Improved Regularization of Convolutional Neural Networks with Cutout paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the wrn16/8 model in the General $E(2)$-Equivariant Steerable CNNs paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the Hamiltonian model in the Reversible Architectures for Arbitrarily Deep Residual Neural Networks paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the ResNet-18+MM+FRL model in the Learning Class Unique Features in Fine-Grained Visual Classification paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the MidPoint model in the Reversible Architectures for Arbitrarily Deep Residual Neural Networks paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the cosine function model in the Image Augmentation for Object Image Classification Based On Combination of PreTrained CNN and SVM paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the HybridNet model in the HybridNet: Classification and Reconstruction Cooperation for Semi-Supervised Learning paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the Leapfrog model in the Reversible Architectures for Arbitrarily Deep Residual Neural Networks paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the skewing model in the Image Augmentation for Object Image Classification Based On Combination of PreTrained CNN and SVM paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the elastic distortion(2) model in the Image Augmentation for Object Image Classification Based On Combination of PreTrained CNN and SVM paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the PSLR-knn model in the Probabilistic Structural Latent Representation for Unsupervised Embedding paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the elastic distortion(1) model in the Image Augmentation for Object Image Classification Based On Combination of PreTrained CNN and SVM paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the ResNet baseline model in the HybridNet: Classification and Reconstruction Cooperation for Semi-Supervised Learning paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the Greedy InfoMax (GIM) model in the Putting An End to End-to-End: Gradient-Isolated Learning of Representations paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the rotation model in the Image Augmentation for Object Image Classification Based On Combination of PreTrained CNN and SVM paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
What metrics were used to measure the ResNet18(BN, 4) model in the Extended Batch Normalization paper on the STL-10 dataset? | Percentage correct, FLOPS, PARAMS |
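Below is a minimal sketch of how a two-column prompt/response table like this one could be loaded and summarized with the Hugging Face `datasets` library. The repository ID `user/metrics-qa` is a placeholder rather than the actual dataset path, and the split name is assumed; the column names `prompts` and `metrics_response` are taken from the header above.

```python
from collections import Counter

from datasets import load_dataset

# Placeholder repository ID and split; substitute the real dataset path.
ds = load_dataset("user/metrics-qa", split="train")

# Each row pairs a question about a model/paper/dataset with a
# comma-separated list of metric names (e.g. "Accuracy" or
# "Percentage correct, FLOPS, PARAMS").
metric_counts = Counter()
for row in ds:
    response = row["metrics_response"].strip()
    if not response:
        # metrics_response lengths start at 0, so empty answers can occur.
        continue
    for metric in response.split(","):
        metric_counts[metric.strip()] += 1

print(metric_counts.most_common(10))
```

A tally like this makes it easy to see which metric names dominate the answers (here, "Accuracy" and the STL-10 triple "Percentage correct, FLOPS, PARAMS").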