prompts | metrics_response |
|---|---|
What metrics were used to measure the ELoPE model in the ELoPE: Fine-Grained Visual Classification with Efficient Localization, Pooling and Embedding paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the ViT-NeT (SwinV2-B) model in the ViT-NeT: Interpretable Vision Transformers with Neural Tree Decoder paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the AFTrans model in the A free lunch from ViT: Adaptive Attention Multi-scale Fusion Transformer for Fine-grained Visual Recognition paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the DB model in the Fine-grained Recognition: Accounting for Subtle Differences between Similar Classes paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the Attribute Mix+ model in the Attribute Mix: Semantic Data Augmentation for Fine Grained Recognition paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the AutoAugment model in the AutoAugment: Learning Augmentation Policies from Data paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the BCN model in the Fine-Grained Visual Classification with Batch Confusion Norm paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the DF-GMM model in the Weakly Supervised Fine-Grained Image Classification via Gaussian Mixture Model Oriented Discriminative Learning paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the TransFG model in the TransFG: A Transformer Architecture for Fine-grained Recognition paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the CRA-CNN model in the Contrastively-reinforced Attention Convolutional Neural Network for Fine-grained Image Recognition paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the S3N model in the Selective Sparse Sampling for Fine-Grained Image Recognition paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the EfficientNet-B7 model in the EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the Grafit (RegNet-8GF) model in the Grafit: Learning fine-grained image representations with coarse labels paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the GPipe model in the GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the Cross-X model in the Cross-X Learning for Fine-Grained Visual Categorization paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the ACNet model in the Attention Convolutional Binary Neural Tree for Fine-Grained Visual Categorization paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the SEB+EfficientNet-B5 model in the On the Eigenvalues of Global Covariance Pooling for Fine-grained Visual Recognition paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the PCA model in the Progressive Co-Attention Network for Fine-grained Visual Classification paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the WS-DAN model in the See Better Before Looking Closer: Weakly Supervised Data Augmentation Network for Fine-Grained Visual Classification paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the CIN model in the Channel Interaction Networks for Fine-Grained Image Categorization paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the LIO/ResNet-50 (multi-stage) model in the Look-into-Object: Self-supervised Structure Modeling for Object Recognition paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the Grad-CAM model in the Grad-CAM guided channel-spatial attention module for fine-grained visual classification paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the FixSENet-154 model in the Fixing the train-test resolution discrepancy paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the Assemble-ResNet-FGVC-50 model in the Compounding the Performance Improvements of Assembled Techniques in a Convolutional Neural Network paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the MC Loss (B-CNN) model in the The Devil is in the Channels: Mutual-Channel Loss for Fine-Grained Image Classification paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the ELP model in the A Simple Episodic Linear Probe Improves Visual Recognition in the Wild paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the MHEM (strong ResNet50 baseline) model in the Penalizing the Hard Example But Not Too Much: A Strong Baseline for Fine-Grained Visual Classification paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the GCL model in the Graph-propagation based Correlation Learning for Weakly Supervised Fine-grained Image Classification paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the SEF model in the Learning Semantically Enhanced Feature for Fine-Grained Image Classification paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the AENet model in the Alignment Enhancement Network for Fine-grained Visual Categorization paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the NTS-Net (K=4) model in the Learning to Navigate for Fine-grained Classification paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the Bamboo (ViT-B/16) model in the Bamboo: Building Mega-Scale Vision Dataset Continually with Human-Machine Synergy paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the DFL-CNN model in the Learning a Discriminative Filter Bank within a CNN for Fine-grained Recognition paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the TASN model in the Looking for the Devil in the Details: Learning Trilinear Attention Sampling Network for Fine-grained Image Recognition paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the ViT-L (attn finetune) model in the Three things everyone should know about Vision Transformers paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the AutoFormer-S (384) model in the AutoFormer: Searching Transformers for Visual Recognition paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the MPN-COV model in the Towards Faster Training of Global Covariance Pooling Networks by Iterative Matrix Square Root Normalization paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the DeiT-B model in the Training data-efficient image transformers & distillation through attention paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the ResNet101-swp model in the Deep CNNs With Spatially Weighted Pooling for Fine-Grained Car Recognition paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the MAMC model in the Multi-Attention Multi-Class Constraint for Fine-grained Image Recognition paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the NAT-M4 model in the Neural Architecture Transfer paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the PC-DenseNet-161 model in the Pairwise Confusion for Fine-Grained Visual Classification paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the MACNN model in the Learning Multi-Attention Convolutional Neural Network for Fine-Grained Image Recognition paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the ResNet50 (A1) model in the ResNet strikes back: An improved training procedure in timm paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the NAT-M3 model in the Neural Architecture Transfer paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the CS-Parts model in the Classification-Specific Parts for Improving Fine-Grained Visual Categorization paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the CS-Part model in the Classification-Specific Parts for Improving Fine-Grained Visual Categorization paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the NAT-M2 model in the Neural Architecture Transfer paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the NAT-M1 model in the Neural Architecture Transfer paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the BYOL+CVSA (ResNet-50) model in the Exploring Localization for Self-supervised Fine-grained Contrastive Learning paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the ResMLP-24 model in the ResMLP: Feedforward networks for image classification with data-efficient training paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the MPFG + CLIP model in the Multiscale patch-based feature graphs for image classification paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the ResMLP-12 model in the ResMLP: Feedforward networks for image classification with data-efficient training paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the SEER (RegNet10B) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the ResNet101-swp model in the Deep CNNs With Spatially Weighted Pooling for Fine-Grained Car Recognition paper on the CarFlag-563 dataset? | Accuracy |
What metrics were used to measure the MetaFormer (MetaFormer-2,384) model in the MetaFormer: A Unified Meta Framework for Fine-Grained Recognition paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the HERBS model in the Fine-grained Visual Classification with High-temperature Refinement and Background Suppression paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the PIM model in the A Novel Plug-in Module for Fine-Grained Visual Classification paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the TransIFC model in the TransIFC: Invariant Cues-aware Feature Concentration Learning for Efficient Fine-grained Bird Image Classification paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the TransFG model in the TransFG: A Transformer Architecture for Fine-grained Recognition paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the IELT model in the Fine-Grained Visual Classification via Internal Ensemble Learning Transformer paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the FVE model in the End-to-end Learning of a Fisher Vector Encoding for Part Features in Fine-grained Recognition paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the TPSKG model in the Transformer with Peak Suppression and Knowledge Guidance for Fine-grained Image Recognition paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the FixSENet-154 model in the Fixing the train-test resolution discrepancy paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the MGE-CNN model in the Learning a Mixture of Granularity-Specific Experts for Fine-Grained Categorization paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the CS-Parts model in the Classification-Specific Parts for Improving Fine-Grained Visual Categorization paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the CS-Part model in the Classification-Specific Parts for Improving Fine-Grained Visual Categorization paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the API-Net model in the Learning Attentive Pairwise Interaction for Fine-Grained Classification paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the PAIRS model in the Aligned to the Object, not to the Image: A Unified Pose-aligned Representation for Fine-grained Recognition paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the Cross-X model in the Cross-X Learning for Fine-Grained Visual Categorization paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the MaxEnt-CNN model in the Maximum-Entropy Fine Grained Classification paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the PC-DenseNet-161 model in the Pairwise Confusion for Fine-Grained Visual Classification paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the BYOL+CVSA (ResNet-50) model in the Exploring Localization for Self-supervised Fine-grained Contrastive Learning paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the Bilinear-CNN model in the Bilinear CNNs for Fine-grained Visual Recognition paper on the NABirds dataset? | Accuracy |
What metrics were used to measure the PHOC descriptor + Fisher Vector Encoding model in the Fine-grained Image Classification and Retrieval by Combining Visual and Locally Pooled Textual Features paper on the Con-Text dataset? | mAP |
What metrics were used to measure the µ2Net (ViT-L/16) model in the An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems paper on the SUN397 dataset? | Accuracy |
What metrics were used to measure the SEER (RegNet10B - linear eval) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the SUN397 dataset? | Accuracy |
What metrics were used to measure the Bamboo (ViT-B/16) model in the Bamboo: Building Mega-Scale Vision Dataset Continually with Human-Machine Synergy paper on the SUN397 dataset? | Accuracy |
What metrics were used to measure the TWIST (ResNet-50) model in the Self-Supervised Learning by Estimating Twin Class Distributions paper on the SUN397 dataset? | Accuracy |
What metrics were used to measure the NNCLR model in the With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations paper on the SUN397 dataset? | Accuracy |
What metrics were used to measure the Conviformer-B model in the Conviformers: Convolutionally guided Vision Transformer paper on the Herbarium 2021 Half–Earth dataset? | Test F1 score |
What metrics were used to measure the CSWin-L model in the Learning Multi-Subset of Classes for Fine-Grained Food Recognition paper on the FoodX-251 dataset? | Accuracy (%) |
What metrics were used to measure the VOLO-D5 model in the Learning Multi-Subset of Classes for Fine-Grained Food Recognition paper on the FoodX-251 dataset? | Accuracy (%) |
What metrics were used to measure the IELT model in the Fine-Grained Visual Classification via Internal Ensemble Learning Transformer paper on the Oxford 102 Flowers dataset? | Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the BiT-L (ResNet) model in the Big Transfer (BiT): General Visual Representation Learning paper on the Oxford 102 Flowers dataset? | Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the µ2Net (ViT-L/16) model in the An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems paper on the Oxford 102 Flowers dataset? | Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the BiT-M (ResNet) model in the Big Transfer (BiT): General Visual Representation Learning paper on the Oxford 102 Flowers dataset? | Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the Wide-ResNet-101 (Spinal FC) model in the SpinalNet: Deep Neural Network with Gradual Input paper on the Oxford 102 Flowers dataset? | Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the TResNet-L model in the TResNet: High Performance GPU-Dedicated Architecture paper on the Oxford 102 Flowers dataset? | Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the Grafit (RegNet-8GF) model in the Grafit: Learning fine-grained image representations with coarse labels paper on the Oxford 102 Flowers dataset? | Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the TNT-B model in the Transformer in Transformer paper on the Oxford 102 Flowers dataset? | Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the Assemble-ResNet model in the Compounding the Performance Improvements of Assembled Techniques in a Convolutional Neural Network paper on the Oxford 102 Flowers dataset? | Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the DeiT-B model in the Training data-efficient image transformers & distillation through attention paper on the Oxford 102 Flowers dataset? | Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the DenseNet-201(Spinal FC) model in the A Comprehensive Study on Torchvision Pre-trained Models for Fine-grained Inter-species Classification paper on the Oxford 102 Flowers dataset? | Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the NAT-M4 model in the Neural Architecture Transfer paper on the Oxford 102 Flowers dataset? | Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the DenseNet-201 model in the A Comprehensive Study on Torchvision Pre-trained Models for Fine-grained Inter-species Classification paper on the Oxford 102 Flowers dataset? | Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the NAT-M3 model in the Neural Architecture Transfer paper on the Oxford 102 Flowers dataset? | Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the ResNet50 (A1) model in the ResNet strikes back: An improved training procedure in timm paper on the Oxford 102 Flowers dataset? | Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the ResMLP-24 model in the ResMLP: Feedforward networks for image classification with data-efficient training paper on the Oxford 102 Flowers dataset? | Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the NAT-M2 model in the Neural Architecture Transfer paper on the Oxford 102 Flowers dataset? | Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top 1 Accuracy |
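Each row above follows a simple `prompt | response |` layout, with a trailing pipe and a single ` | ` separating the two columns. A minimal sketch for parsing such rows into (prompt, response) pairs is below; the helper name `parse_rows` and the sample rows are illustrative, not part of the dataset. Splitting on the *last* ` | ` keeps any pipe that appears inside a prompt (e.g. a model name) attached to the prompt column.

```python
def parse_rows(lines):
    """Split each 'prompt | response |' row into a (prompt, response) tuple."""
    pairs = []
    for line in lines:
        line = line.strip().rstrip("|").strip()
        # skip blank lines and the |---|---| separator row
        if not line or set(line) <= {"-", "|", " "}:
            continue
        # split on the LAST " | " so pipes inside the prompt survive
        prompt, _, response = line.rpartition(" | ")
        if prompt:
            pairs.append((prompt, response))
    return pairs

rows = [
    "prompts | metrics_response |",
    "|---|---|",
    "What metrics were used on the NABirds dataset? | Accuracy |",
]
# rows[0] is the header; data rows start after the separator
print(parse_rows(rows)[-1])
```

Note that `metrics_response` may be empty for some rows (the header advertises lengths from 0), so downstream code should tolerate an empty second column.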