| prompts | metrics_response |
|---|---|
What metrics were used to measure the Graph-Based High-Order Relationship model in the Graph-Based High-Order Relation Discovery for Fine-Grained Recognition paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the SnapMix model in the SnapMix: Semantically Proportional Mixing for Augmenting Fine-grained Data paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the CS-Parts model in the Classification-Specific Parts for Improving Fine-Grained Visual Categorization paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the MGE-CNN model in the Learning a Mixture of Granularity-Specific Experts for Fine-Grained Categorization paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the PAIRS model in the Aligned to the Object, not to the Image: A Unified Pose-aligned Representation for Fine-grained Recognition paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the BCN model in the Fine-Grained Visual Classification with Batch Confusion Norm paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the Knowledge Transfer model in the Knowledge Transfer Based Fine-grained Visual Classification paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the AttNet & AffNet model in the Fine-Grained Visual Classification with Efficient End-to-end Localization paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the PCA-Net model in the Progressive Co-Attention Network for Fine-grained Visual Classification paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the DF-GMM model in the Weakly Supervised Fine-Grained Image Classification via Gaussian Mixture Model Oriented Discriminative Learning paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the MPN-COV model in the Towards Faster Training of Global Covariance Pooling Networks by Iterative Matrix Square Root Normalization paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the GAT model in the Human Attention in Fine-grained Classification paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the DB model in the Fine-grained Recognition: Accounting for Subtle Differences between Similar Classes paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the S3N model in the Selective Sparse Sampling for Fine-Grained Image Recognition paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the ELoPE model in the ELoPE: Fine-Grained Visual Classification with Efficient Localization, Pooling and Embedding paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the Grad-CAM model in the Grad-CAM guided channel-spatial attention module for fine-grained visual classification paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the CIN model in the Channel Interaction Networks for Fine-Grained Image Categorization paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the GCL model in the Graph-propagation based Correlation Learning for Weakly Supervised Fine-grained Image Classification paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the CRA-CNN model in the Contrastively-reinforced Attention Convolutional Neural Network for Fine-grained Image Recognition paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the CMN model in the Fine-grained Classification via Categorical Memory Networks paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the MHEM (a strong ResNet50 baseline) model in the Penalizing the Hard Example But Not Too Much: A Strong Baseline for Fine-Grained Visual Classification paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the Hierarchical Semantic Embedding model in the Fine-Grained Representation Learning and Recognition by Exploiting Hierarchical Semantic Embedding paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the ACNet model in the Attention Convolutional Binary Neural Tree for Fine-Grained Visual Categorization paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the Cross-X model in the Cross-X Learning for Fine-Grained Visual Categorization paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the NTS-Net (K = 4) model in the Learning to Navigate for Fine-grained Classification paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the DFB model in the Learning a Discriminative Filter Bank within a CNN for Fine-grained Recognition paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the MC Loss (ResNet50) model in The Devil is in the Channels: Mutual-Channel Loss for Fine-Grained Image Classification paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the SEF model in the Learning Semantically Enhanced Feature for Fine-Grained Image Classification paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the PC-DenseNet-161 model in the Pairwise Confusion for Fine-Grained Visual Classification paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the MC Loss (B-CNN) model in The Devil is in the Channels: Mutual-Channel Loss for Fine-Grained Image Classification paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the A3M model in the Attribute-Aware Attention Model for Fine-grained Representation Learning paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the Bilinear-CNN model in the Bilinear CNN Models for Fine-Grained Visual Recognition paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the PS-CNN model in the Part-Stacked CNN for Fine-Grained Visual Categorization paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the Part RCNN model in the Part-based R-CNNs for Fine-grained Category Detection paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the Deformable Part Descriptors model in the Deformable Part Descriptors for Fine-grained Recognition and Attribute Prediction paper on the CUB-200-2011 dataset? | Accuracy |
What metrics were used to measure the Vanilla FC layer only model in the ProgressiveSpinalNet architecture for FC layers paper on the MNIST dataset? | Accuracy |
What metrics were used to measure the PC-Softmax model in the Rethinking Softmax with Cross-Entropy: Neural Network Classifier as Mutual Information Estimator paper on the Imbalanced CUB-200-2011 dataset? | Accuracy, Average Per-Class Accuracy |
What metrics were used to measure the ResNeXt-101 model in the A Comprehensive Study on Torchvision Pre-trained Models for Fine-grained Inter-species Classification paper on the Fruits-360 dataset? | Accuracy (%), Accuracy |
What metrics were used to measure the VGG-19bn model in the SpinalNet: Deep Neural Network with Gradual Input paper on the Fruits-360 dataset? | Accuracy (%), Accuracy |
What metrics were used to measure the Pre trained wide-resnet-101 model in the ProgressiveSpinalNet architecture for FC layers paper on the Fruits-360 dataset? | Accuracy (%), Accuracy |
What metrics were used to measure the ResNet101-swp model in the Deep CNNs With Spatially Weighted Pooling for Fine-Grained Car Recognition paper on the CompCars dataset? | Accuracy |
What metrics were used to measure the Fine-Tuning DARTS model in the Fine-Tuning DARTS for Image Classification paper on the CompCars dataset? | Accuracy |
What metrics were used to measure the Resnet50 + COOC model in the Fine-Grained Vehicle Classification with Unsupervised Parts Co-occurrence Learning paper on the CompCars dataset? | Accuracy |
What metrics were used to measure the A3M model in the Attribute-Aware Attention Model for Fine-grained Representation Learning paper on the CompCars dataset? | Accuracy |
What metrics were used to measure the GoogLeNet model in the A Large-Scale Car Dataset for Fine-Grained Categorization and Verification paper on the CompCars dataset? | Accuracy |
What metrics were used to measure the AlexNet model in the A Large-Scale Car Dataset for Fine-Grained Categorization and Verification paper on the CompCars dataset? | Accuracy |
What metrics were used to measure the WideResNet-101 (Spinal FC) model in the A Comprehensive Study on Torchvision Pre-trained Models for Fine-grained Inter-species Classification paper on the Bird-225 dataset? | Accuracy |
What metrics were used to measure the Pre trained wide-resnet-101 model in the ProgressiveSpinalNet architecture for FC layers paper on the Bird-225 dataset? | Accuracy |
What metrics were used to measure the WideResNet-101 model in the A Comprehensive Study on Torchvision Pre-trained Models for Fine-grained Inter-species Classification paper on the Bird-225 dataset? | Accuracy |
What metrics were used to measure the VGG-19bn (Spinal FC) model in the SpinalNet: Deep Neural Network with Gradual Input paper on the Bird-225 dataset? | Accuracy |
What metrics were used to measure the VGG-19bn model in the SpinalNet: Deep Neural Network with Gradual Input paper on the Bird-225 dataset? | Accuracy |
What metrics were used to measure the Assemble-ResNet-FGVC-50 model in the Compounding the Performance Improvements of Assembled Techniques in a Convolutional Neural Network paper on the SOP dataset? | Recall@1 |
What metrics were used to measure the EffNet-L2 (SAM) model in the Sharpness-Aware Minimization for Efficiently Improving Generalization paper on the Oxford-IIIT Pets dataset? | Accuracy, Top-1 Error Rate, PARAMS |
What metrics were used to measure the BiT-L (ResNet) model in the Big Transfer (BiT): General Visual Representation Learning paper on the Oxford-IIIT Pets dataset? | Accuracy, Top-1 Error Rate, PARAMS |
What metrics were used to measure the µ2Net+ (ViT-L/16) model in the A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems paper on the Oxford-IIIT Pets dataset? | Accuracy, Top-1 Error Rate, PARAMS |
What metrics were used to measure the BiT-M (ResNet) model in the Big Transfer (BiT): General Visual Representation Learning paper on the Oxford-IIIT Pets dataset? | Accuracy, Top-1 Error Rate, PARAMS |
What metrics were used to measure the Assemble-ResNet-FGVC-50 model in the Compounding the Performance Improvements of Assembled Techniques in a Convolutional Neural Network paper on the Oxford-IIIT Pets dataset? | Accuracy, Top-1 Error Rate, PARAMS |
What metrics were used to measure the ResNet-152-SAM model in the When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations paper on the Oxford-IIIT Pets dataset? | Accuracy, Top-1 Error Rate, PARAMS |
What metrics were used to measure the ViT-B/16- SAM model in the When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations paper on the Oxford-IIIT Pets dataset? | Accuracy, Top-1 Error Rate, PARAMS |
What metrics were used to measure the ViT-S/16- SAM model in the When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations paper on the Oxford-IIIT Pets dataset? | Accuracy, Top-1 Error Rate, PARAMS |
What metrics were used to measure the Mixer-B/16- SAM model in the When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations paper on the Oxford-IIIT Pets dataset? | Accuracy, Top-1 Error Rate, PARAMS |
What metrics were used to measure the ResNet-50-SAM model in the When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations paper on the Oxford-IIIT Pets dataset? | Accuracy, Top-1 Error Rate, PARAMS |
What metrics were used to measure the Mixer-S/16- SAM model in the When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations paper on the Oxford-IIIT Pets dataset? | Accuracy, Top-1 Error Rate, PARAMS |
What metrics were used to measure the ViT-B/16 model in the An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale paper on the Oxford-IIIT Pets dataset? | Accuracy, Top-1 Error Rate, PARAMS |
What metrics were used to measure the ViT-L/16 model in the An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale paper on the Oxford-IIIT Pets dataset? | Accuracy, Top-1 Error Rate, PARAMS |
What metrics were used to measure the ViT-H/14 model in the An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale paper on the Oxford-IIIT Pets dataset? | Accuracy, Top-1 Error Rate, PARAMS |
What metrics were used to measure the TASN model in the Looking for the Devil in the Details: Learning Trilinear Attention Sampling Network for Fine-grained Image Recognition paper on the iNaturalist dataset? | Top 1 Accuracy |
What metrics were used to measure the EffNet-L2 (SAM) model in the Sharpness-Aware Minimization for Efficiently Improving Generalization paper on the Food-101 dataset? | Accuracy, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the ALIGN model in the Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision paper on the Food-101 dataset? | Accuracy, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the CSWin-L model in the Learning Multi-Subset of Classes for Fine-Grained Food Recognition paper on the Food-101 dataset? | Accuracy, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the Grafit (RegNet-8GF) model in the Grafit: Learning fine-grained image representations with coarse labels paper on the Food-101 dataset? | Accuracy, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the VOLO-D5 model in the Learning Multi-Subset of Classes for Fine-Grained Food Recognition paper on the Food-101 dataset? | Accuracy, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the EfficientNet-B7 model in the EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks paper on the Food-101 dataset? | Accuracy, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the Assemble-ResNet-FGVC-50 model in the Compounding the Performance Improvements of Assembled Techniques in a Convolutional Neural Network paper on the Food-101 dataset? | Accuracy, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the µ2Net+ (ViT-L/16) model in the A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems paper on the Food-101 dataset? | Accuracy, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the NAT-M4 model in the Neural Architecture Transfer paper on the Food-101 dataset? | Accuracy, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the NAT-M3 model in the Neural Architecture Transfer paper on the Food-101 dataset? | Accuracy, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the NAT-M2 model in the Neural Architecture Transfer paper on the Food-101 dataset? | Accuracy, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the NAT-M1 model in the Neural Architecture Transfer paper on the Food-101 dataset? | Accuracy, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the ImageNet + iNat on WS-DAN model in the Domain Adaptive Transfer Learning on Visual Attention Aware Data Augmentation for Fine-grained Visual Categorization paper on the Food-101 dataset? | Accuracy, FLOPS, PARAMS, Top 1 Accuracy |
What metrics were used to measure the VGG-5 model in the ProgressiveSpinalNet architecture for FC layers paper on the QMNIST dataset? | Accuracy |
What metrics were used to measure the Conviformer-B model in the Conviformers: Convolutionally guided Vision Transformer paper on the Herbarium 2022 dataset? | Test F1 score (private) |
What metrics were used to measure the VGG-5 model in the ProgressiveSpinalNet architecture for FC layers paper on the EMNIST-Letters dataset? | Accuracy |
What metrics were used to measure the CMAL-Net model in the Learn from Each Other to Classify Better: Cross-layer Mutual Attention Learning for Fine-grained Visual Classification paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the TResNet-L + ML-Decoder model in the ML-Decoder: Scalable and Versatile Classification Head paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the DAT model in the Domain Adaptive Transfer Learning with Specialist Models paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the ALIGN model in the Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the EffNet-L2 (SAM) model in the Sharpness-Aware Minimization for Efficiently Improving Generalization paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the CAP model in the Context-aware Attentional Pooling (CAP) for Fine-grained Visual Classification paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the AttNet & AffNet model in the Fine-Grained Visual Classification with Efficient End-to-end Localization paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the CAL model in the Counterfactual Attention Learning for Fine-Grained Visual Categorization and Re-identification paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the CCFR model in the Re-rank Coarse Classification with Local Region Enhanced Features for Fine-Grained Image Recognition paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the Inceptionv4 model in the Non-binary deep transfer learning for image classification paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the API-Net model in the Learning Attentive Pairwise Interaction for Fine-Grained Classification paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the DCAL model in the Dual Cross-Attention Learning for Fine-Grained Visual Categorization and Object Re-Identification paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the PART model in the Part-guided Relational Transformers for Fine-grained Visual Recognition paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the DenseNet161+MM+FRL model in the Learning Class Unique Features in Fine-Grained Visual Classification paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the PMG model in the Fine-Grained Visual Classification via Progressive Multi-Granularity Training of Jigsaw Patches paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the Multi Granularity model in the Your "Flamingo" is My "Bird": Fine-Grained, or Not paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the TBMSL-Net model in the Multi-branch and Multi-scale Attention Learning for Fine-Grained Visual Categorization paper on the Stanford Cars dataset? | Accuracy, FLOPS, PARAMS |