Columns: prompts — string, lengths 81 to 413; metrics_response — string, lengths 0 to 371
What metrics were used to measure the ResMLP-12 model in the ResMLP: Feedforward networks for image classification with data-efficient training paper on the Oxford 102 Flowers dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top-1 Accuracy
What metrics were used to measure the FixInceptionResNet-V2 model in the Fixing the train-test resolution discrepancy paper on the Oxford 102 Flowers dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top-1 Accuracy
What metrics were used to measure the AutoAugment model in the AutoAugment: Learning Augmentation Policies from Data paper on the Oxford 102 Flowers dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top-1 Accuracy
What metrics were used to measure the PC Bilinear CNN model in the Pairwise Confusion for Fine-Grained Visual Classification paper on the Oxford 102 Flowers dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top-1 Accuracy
What metrics were used to measure the AutoFormer-S | 384 model in the AutoFormer: Searching Transformers for Visual Recognition paper on the Oxford 102 Flowers dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top-1 Accuracy
What metrics were used to measure the CCT-14/7x2 model in the Escaping the Big Data Paradigm with Compact Transformers paper on the Oxford 102 Flowers dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top-1 Accuracy
What metrics were used to measure the ViT-L/16 model in the An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale paper on the Oxford 102 Flowers dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top-1 Accuracy
What metrics were used to measure the ViT-H/14 model in the An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale paper on the Oxford 102 Flowers dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top-1 Accuracy
What metrics were used to measure the NAT-M1 model in the Neural Architecture Transfer paper on the Oxford 102 Flowers dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Top-1 Accuracy
What metrics were used to measure the EnGraf-Net101 (G=4, H=1) model in the EnGraf-Net: Multiple Granularity Branch Network with Fine-Coarse Graft Grained for Classification Task paper on the FGVC-Aircraft dataset?
Accuracy
What metrics were used to measure the EffNet-L2 (SAM) model in the Sharpness-Aware Minimization for Efficiently Improving Generalization paper on the Birdsnap dataset?
Accuracy
What metrics were used to measure the FixSENet-154 model in the Fixing the train-test resolution discrepancy paper on the Birdsnap dataset?
Accuracy
What metrics were used to measure the EfficientNet-B7 model in the EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks paper on the Birdsnap dataset?
Accuracy
What metrics were used to measure the GPIPE model in the GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism paper on the Birdsnap dataset?
Accuracy
What metrics were used to measure the NNCLR model in the With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations paper on the Birdsnap dataset?
Accuracy
What metrics were used to measure the Pre-trained wide-resnet-101 model in the ProgressiveSpinalNet architecture for FC layers paper on the STL-10 dataset?
Accuracy
What metrics were used to measure the DINOv2 (ViT-g/14, frozen model, linear eval) model in the DINOv2: Learning Robust Visual Features without Supervision paper on the Oxford-IIIT Pet Dataset dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Accuracy (%)
What metrics were used to measure the ALIGN model in the Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision paper on the Oxford-IIIT Pet Dataset dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Accuracy (%)
What metrics were used to measure the EfficientNet-B7 model in the EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks paper on the Oxford-IIIT Pet Dataset dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Accuracy (%)
What metrics were used to measure the µ2Net (ViT-L/16) model in the An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems paper on the Oxford-IIIT Pet Dataset dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Accuracy (%)
What metrics were used to measure the IELT model in the Fine-Grained Visual Classification via Internal Ensemble Learning Transformer paper on the Oxford-IIIT Pet Dataset dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Accuracy (%)
What metrics were used to measure the Bamboo (ViT-B/16) model in the Bamboo: Building Mega-Scale Vision Dataset Continually with Human-Machine Synergy paper on the Oxford-IIIT Pet Dataset dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Accuracy (%)
What metrics were used to measure the TNT-B model in the Transformer in Transformer paper on the Oxford-IIIT Pet Dataset dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Accuracy (%)
What metrics were used to measure the AutoFormer-S | 384 model in the AutoFormer: Searching Transformers for Visual Recognition paper on the Oxford-IIIT Pet Dataset dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Accuracy (%)
What metrics were used to measure the FixSENet-154 model in the Fixing the train-test resolution discrepancy paper on the Oxford-IIIT Pet Dataset dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Accuracy (%)
What metrics were used to measure the NAT-M4 model in the Neural Architecture Transfer paper on the Oxford-IIIT Pet Dataset dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Accuracy (%)
What metrics were used to measure the NAT-M3 model in the Neural Architecture Transfer paper on the Oxford-IIIT Pet Dataset dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Accuracy (%)
What metrics were used to measure the NAT-M2 model in the Neural Architecture Transfer paper on the Oxford-IIIT Pet Dataset dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Accuracy (%)
What metrics were used to measure the AutoAugment model in the AutoAugment: Learning Augmentation Policies from Data paper on the Oxford-IIIT Pet Dataset dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Accuracy (%)
What metrics were used to measure the SEER (RegNet10B) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the Oxford-IIIT Pet Dataset dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Accuracy (%)
What metrics were used to measure the ViT R26 + S/32 ( Augmented) model in the Towards Fine-grained Image Classification with Generative Adversarial Networks and Facial Landmark Detection paper on the Oxford-IIIT Pet Dataset dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Accuracy (%)
What metrics were used to measure the NAT-M1 model in the Neural Architecture Transfer paper on the Oxford-IIIT Pet Dataset dataset?
Accuracy, Top-1 Error Rate, FLOPS, PARAMS, Accuracy (%)
What metrics were used to measure the HERBS model in the Fine-grained Visual Classification with High-temperature Refinement and Background Suppression paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the PIM model in the A Novel Plug-in Module for Fine-Grained Visual Classification paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the CAP model in the Context-aware Attentional Pooling (CAP) for Fine-grained Visual Classification paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the TransFG model in the TransFG: A Transformer Architecture for Fine-grained Recognition paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the SWAG (ViT H/14) model in the Revisiting Weakly Supervised Pre-Training of Visual Perception Models paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the FFVT model in the Feature Fusion Vision Transformer for Fine-Grained Visual Categorization paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the DATL model in the Domain Adaptive Transfer Learning on Visual Attention Aware Data Augmentation for Fine-grained Visual Categorization paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the CAL model in the Counterfactual Attention Learning for Fine-Grained Visual Categorization and Re-identification paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the HOI-Net model in the High-Order-Interaction for weakly supervised Fine-Grained Visual Categorization paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the TBMSL-Net model in the Multi-branch and Multi-scale Attention Learning for Fine-Grained Visual Categorization paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the FBSD model in the Feature Boosting, Suppression, and Diversification for Fine-Grained Visual Classification paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the WS-DAN model in the See Better Before Looking Closer: Weakly Supervised Data Augmentation Network for Fine-Grained Visual Classification paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the ELP model in the A Simple Episodic Linear Probe Improves Visual Recognition in the Wild paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the MPN-COV model in the Towards Faster Training of Global Covariance Pooling Networks by Iterative Matrix Square Root Normalization paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the FixSENet-154 model in the Fixing the train-test resolution discrepancy paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the DenseNet161+MM+FRL model in the Learning Class Unique Features in Fine-Grained Visual Classification paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the LIO model in the Look-into-Object: Self-supervised Structure Modeling for Object Recognition paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the TASN model in the Looking for the Devil in the Details: Learning Trilinear Attention Sampling Network for Fine-grained Image Recognition paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the Nts-Net model in the Are These Birds Similar: Learning Branched Networks for Fine-grained Representations paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the DFL-CNN model in the Learning a Discriminative Filter Bank within a CNN for Fine-grained Recognition paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the PC model in the Pairwise Confusion for Fine-Grained Visual Classification paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the MACNN model in the Learning Multi-Attention Convolutional Neural Network for Fine-Grained Image Recognition paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the Basel.+LSRO model in the Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in vitro paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the BYOL+CVSA (ResNet-50) model in the Exploring Localization for Self-supervised Fine-grained Contrastive Learning paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the Deformable Part Descriptors model in the Deformable Part Descriptors for Fine-grained Recognition and Attribute Prediction paper on the CUB-200-2011 dataset?
Accuracy
What metrics were used to measure the Phraseformer(BERT, ExEm(ft)) model in the Phraseformer: Multimodal Key-phrase Extraction using Transformer and Graph Embedding paper on the SemEval2017 dataset?
F1 score, Precision@10, Recall@10
What metrics were used to measure the Phraseformer(BERT, ExEm(w2v)) model in the Phraseformer: Multimodal Key-phrase Extraction using Transformer and Graph Embedding paper on the SemEval2017 dataset?
F1 score, Precision@10, Recall@10
What metrics were used to measure the Phraseformer(BERT, Node2vec) model in the Phraseformer: Multimodal Key-phrase Extraction using Transformer and Graph Embedding paper on the SemEval2017 dataset?
F1 score, Precision@10, Recall@10
What metrics were used to measure the Phraseformer(BERT, DeepWalk) model in the Phraseformer: Multimodal Key-phrase Extraction using Transformer and Graph Embedding paper on the SemEval2017 dataset?
F1 score, Precision@10, Recall@10
What metrics were used to measure the FRAKE model in the FRAKE: Fusional Real-time Automatic Keyword Extraction paper on the SemEval2017 dataset?
F1 score, Precision@10, Recall@10
What metrics were used to measure the Phraseformer(BERT, ExEm(ft)) model in the Phraseformer: Multimodal Key-phrase Extraction using Transformer and Graph Embedding paper on the Inspec dataset?
F1 score, Precision@10, Recall@10
What metrics were used to measure the Phraseformer(BERT, ExEm(w2v)) model in the Phraseformer: Multimodal Key-phrase Extraction using Transformer and Graph Embedding paper on the Inspec dataset?
F1 score, Precision@10, Recall@10
What metrics were used to measure the Phraseformer(BERT, Node2vec) model in the Phraseformer: Multimodal Key-phrase Extraction using Transformer and Graph Embedding paper on the Inspec dataset?
F1 score, Precision@10, Recall@10
What metrics were used to measure the Phraseformer(BERT, DeepWalk) model in the Phraseformer: Multimodal Key-phrase Extraction using Transformer and Graph Embedding paper on the Inspec dataset?
F1 score, Precision@10, Recall@10
What metrics were used to measure the FRAKE model in the FRAKE: Fusional Real-time Automatic Keyword Extraction paper on the Inspec dataset?
F1 score, Precision@10, Recall@10
What metrics were used to measure the Phraseformer(BERT, ExEm(ft)) model in the Phraseformer: Multimodal Key-phrase Extraction using Transformer and Graph Embedding paper on the SemEval 2010 Task 8 dataset?
F1 score, Precision@10, Recall@10
What metrics were used to measure the Phraseformer(BERT, ExEm(w2v)) model in the Phraseformer: Multimodal Key-phrase Extraction using Transformer and Graph Embedding paper on the SemEval 2010 Task 8 dataset?
F1 score, Precision@10, Recall@10
What metrics were used to measure the Phraseformer(BERT, Node2vec) model in the Phraseformer: Multimodal Key-phrase Extraction using Transformer and Graph Embedding paper on the SemEval 2010 Task 8 dataset?
F1 score, Precision@10, Recall@10
What metrics were used to measure the Phraseformer(BERT, DeepWalk) model in the Phraseformer: Multimodal Key-phrase Extraction using Transformer and Graph Embedding paper on the SemEval 2010 Task 8 dataset?
F1 score, Precision@10, Recall@10
What metrics were used to measure the FRAKE model in the FRAKE: Fusional Real-time Automatic Keyword Extraction paper on the SemEval 2010 Task 8 dataset?
F1 score, Precision@10, Recall@10
What metrics were used to measure the UNS model in the Unsupervised Abstractive Meeting Summarization with Multi-Sentence Compression and Budgeted Submodular Maximization paper on the ICSI Meeting Corpus dataset?
ROUGE-1 F1
What metrics were used to measure the UNS model in the Unsupervised Abstractive Meeting Summarization with Multi-Sentence Compression and Budgeted Submodular Maximization paper on the AMI Meeting Corpus dataset?
ROUGE-1 F1
What metrics were used to measure the PolyNet model in the PolyNet: Polynomial Neural Network for 3D Shape Recognition with PolyShape Representation paper on the ModelNet10 dataset?
Accuracy
What metrics were used to measure the ORION model in the Orientation-boosted Voxel Nets for 3D Object Recognition paper on the ModelNet10 dataset?
Accuracy
What metrics were used to measure the G3DNet-18 SVM, Fine-Tuned, Vote model in the General-Purpose Deep Point Cloud Feature Extractor paper on the ModelNet10 dataset?
Accuracy
What metrics were used to measure the ECC (12 votes) model in the Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs paper on the ModelNet10 dataset?
Accuracy
What metrics were used to measure the Ours model in the Exploiting Inductive Bias in Transformer for Point Cloud Classification and Segmentation paper on the ModelNet40 dataset?
Classification Accuracy, Accuracy
What metrics were used to measure the G3DNet-18 MLP, Fine-Tuned, Vote model in the General-Purpose Deep Point Cloud Feature Extractor paper on the ModelNet40 dataset?
Classification Accuracy, Accuracy
What metrics were used to measure the O-CNN(6) model in the O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis paper on the ModelNet40 dataset?
Classification Accuracy, Accuracy
What metrics were used to measure the 3D-PointCapsNet model in the 3D Point Capsule Networks paper on the ModelNet40 dataset?
Classification Accuracy, Accuracy
What metrics were used to measure the Spherical Kernel model in the Spherical Kernel for Efficient Graph Convolution on 3D Point Clouds paper on the ModelNet40 dataset?
Classification Accuracy, Accuracy
What metrics were used to measure the ECC (12 votes) model in the Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs paper on the ModelNet40 dataset?
Classification Accuracy, Accuracy
What metrics were used to measure the SceneGraphFusion model in the SceneGraphFusion: Incremental 3D Scene Graph Prediction from RGB-D Sequences paper on the 3R-Scan dataset?
Top-10 Accuracy, Top-5 Accuracy
What metrics were used to measure the 3DSSG [Wald2020_3dssg] model in the SceneGraphFusion: Incremental 3D Scene Graph Prediction from RGB-D Sequences paper on the 3R-Scan dataset?
Top-10 Accuracy, Top-5 Accuracy
What metrics were used to measure the TesseTrack model in the TesseTrack: End-to-End Learnable Multi-Person Articulated 3D Pose Tracking paper on the Panoptic dataset?
Average MPJPE (mm)
What metrics were used to measure the PRGN model in the Graph-Based 3D Multi-Person Pose Estimation Using Multi-View Images paper on the Panoptic dataset?
Average MPJPE (mm)
What metrics were used to measure the MvP model in the Direct Multi-view Multi-person 3D Pose Estimation paper on the Panoptic dataset?
Average MPJPE (mm)
What metrics were used to measure the PlaneSweepPose model in the Multi-View Multi-Person 3D Pose Estimation with Plane Sweep Stereo paper on the Panoptic dataset?
Average MPJPE (mm)
What metrics were used to measure the VoxelPose model in the VoxelPose: Towards Multi-Camera 3D Human Pose Estimation in Wild Environment paper on the Panoptic dataset?
Average MPJPE (mm)
What metrics were used to measure the Faster VoxelPose model in the Faster VoxelPose: Real-time 3D Human Pose Estimation by Orthographic Projection paper on the Panoptic dataset?
Average MPJPE (mm)
What metrics were used to measure the VoxelTrack model in the VoxelTrack: Multi-Person 3D Human Pose Estimation and Tracking in the Wild paper on the Panoptic dataset?
Average MPJPE (mm)
What metrics were used to measure the QuickPose model in the QuickPose: Real-time Multi-view Multi-person Pose Estimation in Crowded Scenes paper on the Panoptic dataset?
Average MPJPE (mm)
What metrics were used to measure the Light3DPose model in the Light3DPose: Real-time Multi-Person 3D PoseEstimation from Multiple Views paper on the Panoptic dataset?
Average MPJPE (mm)
What metrics were used to measure the IVT (f=5) model in the IVT: An End-to-End Instance-guided Video Transformer for 3D Pose Estimation paper on the Panoptic dataset?
Average MPJPE (mm)
What metrics were used to measure the PIRN model in the Permutation-Invariant Relational Network for Multi-person 3D Pose Estimation paper on the Panoptic dataset?
Average MPJPE (mm)
What metrics were used to measure the MVG model in the Multi-person 3D Pose Estimation in Crowded Scenes Based on Multi-View Geometry paper on the Panoptic dataset?
Average MPJPE (mm)
What metrics were used to measure the HMOR model in the HMOR: Hierarchical Multi-Person Ordinal Relations for Monocular Multi-Person 3D Pose Estimation paper on the Panoptic dataset?
Average MPJPE (mm)
What metrics were used to measure the DAS model in the Distribution-Aware Single-Stage Models for Multi-Person 3D Pose Estimation paper on the Panoptic dataset?
Average MPJPE (mm)