prompts | metrics_response |
|---|---|
What metrics were used to measure the PSN (SEW ResNet-18) model in the Parallel Spiking Neurons with High Efficiency and Ability to Learn Long-term Dependencies paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the MUXNet-xs model in the MUXConv: Information Multiplexing in Convolutional Neural Networks paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the PAWS (ResNet-50, 1% labels) model in the Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the GhostNet ×0.5 model in the GhostNet: More Features from Cheap Operations paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the OTTT model in the Online Training Through Time for Spiking Neural Networks paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the DY-MobileNetV2 ×0.35 model in the Dynamic Convolution: Attention over Convolution Kernels paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the BBG (ResNet-34) model in the Balanced Binary Neural Networks with Gated Residual paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the SimpleNetV1-small-05 model in the Lets keep it simple, Using simple architectures to outperform deeper and more complex architectures paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the BBG (ResNet-18) model in the Balanced Binary Neural Networks with Gated Residual paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the FireCaffe (AlexNet) model in the FireCaffe: near-linear acceleration of deep neural network training on compute clusters paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the Amoeba-C model in the Regularized Evolution for Image Classifier Architecture Search paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the IFQ-AlexNet model in the IFQ-Net: Integrated Fixed-point Quantization Networks for Embedded Vision paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the DeiT-Ti [deit] model in the Generic-to-Specific Distillation of Masked Autoencoders paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the ResNet-18 [resnet] model in the Generic-to-Specific Distillation of Masked Autoencoders paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the SRDA (ours) model in the Continual Learning with Deep Streaming Regularized Discriminant Analysis paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the EVA (EVA-CLIP) model in the EVA: Exploring the Limits of Masked Visual Representation Learning at Scale paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the MobileNet-224 ×1.0 model in the MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the Inception V2 model in the Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift paper on the ImageNet dataset? | Top 1 Accuracy, Number of params, GFLOPs, Top 5 Accuracy, Hardware Burden, Operations per network pass, Continual Weighted Accuracy |
What metrics were used to measure the WRN-28-2 + UDA+AutoDropout model in the AutoDropout: Learning Dropout Patterns to Regularize Deep Networks paper on the cifar-10,4000 dataset? | Percentage error |
What metrics were used to measure the PCGAN-CHAR model in the PCGAN-CHAR: Progressively Trained Classifier Generative Adversarial Networks for Classification of Noisy Handwritten Bangla Characters paper on the Noisy MNIST (Motion) dataset? | Accuracy |
What metrics were used to measure the Pixel-level RC model in the Pixel-level Reconstruction and Classification for Noisy Handwritten Bangla Characters paper on the Noisy MNIST (Motion) dataset? | Accuracy |
What metrics were used to measure the SNN model in the Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data paper on the DVS128 Gesture dataset? | Accuracy |
What metrics were used to measure the MentorMix model in the Faster Meta Update Strategy for Noise-Robust Deep Learning paper on the CIFAR-100, 60% Symmetric Noise dataset? | Percentage correct |
What metrics were used to measure the EGNN+Transduction model in the Edge-labeling Graph Neural Network for Few-shot Learning paper on the Tiered ImageNet 5-way (5-shot) dataset? | Accuracy |
What metrics were used to measure the Relation Net model in the Learning to Compare: Relation Network for Few-Shot Learning paper on the Tiered ImageNet 5-way (5-shot) dataset? | Accuracy |
What metrics were used to measure the Reptile + BN model in the On First-Order Meta-Learning Algorithms paper on the Tiered ImageNet 5-way (5-shot) dataset? | Accuracy |
What metrics were used to measure the MAML+Transduction model in the Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks paper on the Tiered ImageNet 5-way (5-shot) dataset? | Accuracy |
What metrics were used to measure the MAML model in the Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks paper on the Tiered ImageNet 5-way (5-shot) dataset? | Accuracy |
What metrics were used to measure the Prototypical Net model in the Prototypical Networks for Few-shot Learning paper on the Tiered ImageNet 5-way (5-shot) dataset? | Accuracy |
What metrics were used to measure the Reptile model in the On First-Order Meta-Learning Algorithms paper on the Tiered ImageNet 5-way (5-shot) dataset? | Accuracy |
What metrics were used to measure the InternImage-H model in the InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the MAWS (ViT-2B) model in the The effectiveness of MAE pre-pretraining for billion-scale pretraining paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the MetaFormer (MetaFormer-2,384,extra_info) model in the MetaFormer: A Unified Meta Framework for Fine-Grained Recognition paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the Hiera-H (448px) model in the Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the MAE (ViT-H, 448) model in the Masked Autoencoders Are Scalable Vision Learners paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the SWAG (ViT H/14) model in the Revisiting Weakly Supervised Pre-Training of Visual Perception Models paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the SEER (RegNet10B - finetuned - 384px) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the MetaFormer (MetaFormer-2,384) model in the MetaFormer: A Unified Meta Framework for Fine-Grained Recognition paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the OMNIVORE (Swin-L) model in the Omnivore: A Single Model for Many Visual Modalities paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the RegNet-8GF model in the Grafit: Learning fine-grained image representations with coarse labels paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the VL-LTR (ViT-B-16) model in the VL-LTR: Learning Class-wise Visual-Linguistic Representation for Long-Tailed Visual Recognition paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the µ2Net+ (ViT-L/16) model in the A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the MixMIM-L model in the MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the DeiT-B model in the Training data-efficient image transformers & distillation through attention paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the CeiT-S (384 finetune resolution) model in the Incorporating Convolution Designs into Visual Transformers paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the GPaCo (ResNet-152) model in the Generalized Parametric Contrastive Learning paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the CaiT-M-36 U 224 model in the Going deeper with Image Transformers paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the MixMIM-B model in the MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the GPaCo (ResNet-50) model in the Generalized Parametric Contrastive Learning paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the CBD-ENS (ResNet-101) model in the Class-Balanced Distillation for Long-Tailed Visual Recognition paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the ViT-L (attn finetune) model in the Three things everyone should know about Vision Transformers paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the PaCo(ResNet-152) model in the Parametric Contrastive Learning paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the VL-LTR (ResNet-50) model in the VL-LTR: Learning Class-wise Visual-Linguistic Representation for Long-Tailed Visual Recognition paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the BS-CMO (ResNet-50) model in the The Majority Can Help The Minority: Context-rich Minority Oversampling for Long-tailed Classification paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the CBD-ENS (ResNet-50) model in the Class-Balanced Distillation for Long-Tailed Visual Recognition paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the CeiT-S model in the Incorporating Convolution Designs into Visual Transformers paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the TADE (ResNet-50) model in the Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the CeiT-T (384 finetune resolution) model in the Incorporating Convolution Designs into Visual Transformers paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the RIDE (ResNet-50) model in the Long-tailed Recognition by Routing Diverse Distribution-Aware Experts paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the ResNeXt-101 (SAMix) model in the Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the ResNeXt-101 (AutoMix) model in the AutoMix: Unveiling the Power of Mixup for Stronger Classifiers paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the LADE model in the Disentangling Label Distribution for Long-tailed Visual Recognition paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the ResNet-50 model in the Grafit: Learning fine-grained image representations with coarse labels paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the ResNet-152 model in the Feature Space Augmentation for Long-Tailed Data paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the ResNet-152 model in the Class-Balanced Loss Based on Effective Number of Samples paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the MetaSAug model in the MetaSAug: Meta Semantic Augmentation for Long-Tailed Visual Recognition paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the ResNet-101 model in the Feature Space Augmentation for Long-Tailed Data paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the ResNet-101 model in the Class-Balanced Loss Based on Effective Number of Samples paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the LeViT-384 model in the LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the LeViT-256 model in the LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the ResNet-50 model in the Feature Space Augmentation for Long-Tailed Data paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the ResNet-50 (SAMix) model in the Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the ResNet-50 (AutoMix) model in the AutoMix: Unveiling the Power of Mixup for Stronger Classifiers paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the CeiT-T model in the Incorporating Convolution Designs into Visual Transformers paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the ResMLP-24 model in the ResMLP: Feedforward networks for image classification with data-efficient training paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the ResNet-50 model in the Class-Balanced Loss Based on Effective Number of Samples paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the LeViT-192 model in the LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the Inception-V3 model in the The iNaturalist Species Classification and Detection Dataset paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the ResMLP-12 model in the ResMLP: Feedforward networks for image classification with data-efficient training paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the LeViT-128S model in the LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the LeViT-128 model in the LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the ResNet-50 model in the ClusterFit: Improving Generalization of Visual Representations paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the ResNet-50 model in the Unsupervised Learning of Visual Features by Contrasting Cluster Assignments paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the Barlow Twins (ResNet-50) model in the Barlow Twins: Self-Supervised Learning via Redundancy Reduction paper on the iNaturalist 2018 dataset? | Top-1 Accuracy |
What metrics were used to measure the µ2Net+ (ViT-L/16) model in the A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems paper on the Stanford Online Products dataset? | Accuracy |
What metrics were used to measure the V-MoE-H/14 (Every-2) model in the Scaling Vision with Sparse Mixture of Experts paper on the JFT-300M dataset? | prec@1 |
What metrics were used to measure the V-MoE-H/14 (Last-5) model in the Scaling Vision with Sparse Mixture of Experts paper on the JFT-300M dataset? | prec@1 |
What metrics were used to measure the V-MoE-L/16 (Every-2) model in the Scaling Vision with Sparse Mixture of Experts paper on the JFT-300M dataset? | prec@1 |
What metrics were used to measure the VIT-H/14 model in the Scaling Vision with Sparse Mixture of Experts paper on the JFT-300M dataset? | prec@1 |
What metrics were used to measure the µ2Net+ (ViT-L/16) model in the A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems paper on the Imagenette dataset? | Accuracy |
What metrics were used to measure the SmoothNetV1 model in the SmoothNets: Optimizing CNN architecture design for differentially private deep learning paper on the Imagenette dataset? | Accuracy |
What metrics were used to measure the MLP-DecAug model in the DecAug: Out-of-Distribution Generalization via Decomposed Feature Representation and Semantic Augmentation paper on the Colored-MNIST(with spurious correlation) dataset? | Accuracy |
What metrics were used to measure the MLP-REx model in the Out-of-Distribution Generalization via Risk Extrapolation (REx) paper on the Colored-MNIST(with spurious correlation) dataset? | Accuracy |
What metrics were used to measure the MLP-IRM model in the Invariant Risk Minimization paper on the Colored-MNIST(with spurious correlation) dataset? | Accuracy |
What metrics were used to measure the F-IRMGames model in the Invariant Risk Minimization Games paper on the Colored-MNIST(with spurious correlation) dataset? | Accuracy |
What metrics were used to measure the MLP-ERM model in the Empirical Risk Minimization for Stochastic Convex Optimization: $O(1/n)$- and $O(1/n^2)$-type of Risk Bounds paper on the Colored-MNIST(with spurious correlation) dataset? | Accuracy |
What metrics were used to measure the JiGen model in the Domain Generalization by Solving Jigsaw Puzzles paper on the Colored-MNIST(with spurious correlation) dataset? | Accuracy |
What metrics were used to measure the Fuzzy Distance Ensemble model in the A fuzzy distance-based ensemble of deep models for cervical cancer detection paper on the HErlev dataset? | Accuracy |
What metrics were used to measure the DL+PCA+GWO model in the Cervical Cytology Classification Using PCA & GWO Enhanced Deep Features Selection paper on the HErlev dataset? | Accuracy |
What metrics were used to measure the Diffusion Classifier (zero-shot) model in the Your Diffusion Model is Secretly a Zero-Shot Classifier paper on the ObjectNet (ImageNet classes) dataset? | Top 1 Accuracy |