prompts | metrics_response |
|---|---|
What metrics were used to measure the Prior-LT model in the Towards Calibrated Model for Long-Tailed Visual Recognition from Prior Perspective paper on the CIFAR-10-LT (ρ=10) dataset? | Error Rate |
What metrics were used to measure the M2m model in the M2m: Imbalanced Classification via Major-to-minor Translation paper on the CIFAR-10-LT (ρ=10) dataset? | Error Rate |
What metrics were used to measure the DecTDE model in the Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect paper on the CIFAR-10-LT (ρ=10) dataset? | Error Rate |
What metrics were used to measure the Class-balanced Focal Loss model in the Class-Balanced Loss Based on Effective Number of Samples paper on the CIFAR-10-LT (ρ=10) dataset? | Error Rate |
What metrics were used to measure the IBLLoss model in the Influence-Balanced Loss for Imbalanced Visual Classification paper on the CIFAR-10-LT (ρ=10) dataset? | Error Rate |
What metrics were used to measure the Class-balanced Resampling model in the Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss paper on the CIFAR-10-LT (ρ=10) dataset? | Error Rate |
What metrics were used to measure the Class-balanced Reweighting model in the Class-Balanced Loss Based on Effective Number of Samples paper on the CIFAR-10-LT (ρ=10) dataset? | Error Rate |
What metrics were used to measure the Empirical Risk Minimization (ERM, CE) model in the Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss paper on the CIFAR-10-LT (ρ=10) dataset? | Error Rate |
What metrics were used to measure the RISDA model in the Imagine by Reasoning: A Reasoning-Based Implicit Semantic Data Augmentation for Long-Tailed Classification paper on the CIFAR-10-LT (ρ=10) dataset? | Error Rate |
What metrics were used to measure the CDB-loss (3D-ResNeXt101) model in the Class-Wise Difficulty-Balanced Loss for Solving Class-Imbalance paper on the EGTEA dataset? | Average Precision, Average Recall |
What metrics were used to measure the CB Loss model in the Class-Balanced Loss Based on Effective Number of Samples paper on the EGTEA dataset? | Average Precision, Average Recall |
What metrics were used to measure the Focal loss (3D-ResNeXt101) model in the Focal Loss for Dense Object Detection paper on the EGTEA dataset? | Average Precision, Average Recall |
What metrics were used to measure the TailCalibX model in the Feature Generation for Long-tail Classification paper on the mini-ImageNet-LT dataset? | Error Rate |
What metrics were used to measure the LMPT(ViT-B/16) model in the LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition paper on the VOC-MLT dataset? | Average mAP |
What metrics were used to measure the CLIP(ViT-B/16) model in the Learning Transferable Visual Models From Natural Language Supervision paper on the VOC-MLT dataset? | Average mAP |
What metrics were used to measure the LMPT(ResNet-50) model in the LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition paper on the VOC-MLT dataset? | Average mAP |
What metrics were used to measure the CLIP(ResNet-50) model in the Learning Transferable Visual Models From Natural Language Supervision paper on the VOC-MLT dataset? | Average mAP |
What metrics were used to measure the LTML(ResNet-50) model in the Long-Tailed Multi-Label Visual Recognition by Collaborative Training on Uniform and Re-Balanced Samplings paper on the VOC-MLT dataset? | Average mAP |
What metrics were used to measure the PG Loss(ResNet-50) model in the Probability Guided Loss for Long-Tailed Multi-Label Image Classification paper on the VOC-MLT dataset? | Average mAP |
What metrics were used to measure the DB Focal(ResNet-50) model in the Distribution-Balanced Loss for Multi-Label Classification in Long-Tailed Datasets paper on the VOC-MLT dataset? | Average mAP |
What metrics were used to measure the RS(ResNet-50) model in the Relay Backpropagation for Effective Learning of Deep Convolutional Neural Networks paper on the VOC-MLT dataset? | Average mAP |
What metrics were used to measure the CB Focal(ResNet-50) model in the Class-Balanced Loss Based on Effective Number of Samples paper on the VOC-MLT dataset? | Average mAP |
What metrics were used to measure the Focal Loss(ResNet-50) model in the Focal Loss for Dense Object Detection paper on the VOC-MLT dataset? | Average mAP |
What metrics were used to measure the OLTR(ResNet-50) model in the Large-Scale Long-Tailed Recognition in an Open World paper on the VOC-MLT dataset? | Average mAP |
What metrics were used to measure the LDAM(ResNet-50) model in the Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss paper on the VOC-MLT dataset? | Average mAP |
What metrics were used to measure the ML-GCN(ResNet-50) model in the Multi-Label Image Recognition with Graph Convolutional Networks paper on the VOC-MLT dataset? | Average mAP |
What metrics were used to measure the PaCo + SAM model in the Escaping Saddle Points for Effective Generalization on Class-Imbalanced Data paper on the CIFAR-100-LT (ρ=200) dataset? | Error Rate |
What metrics were used to measure the MetaSAug-LDAM model in the MetaSAug: Meta Semantic Augmentation for Long-Tailed Visual Recognition paper on the CIFAR-100-LT (ρ=200) dataset? | Error Rate |
What metrics were used to measure the GLMC + SAM model in the Escaping Saddle Points for Effective Generalization on Class-Imbalanced Data paper on the CIFAR-10-LT (ρ=50) dataset? | Error Rate |
What metrics were used to measure the OPeN (WideResNet-28-10) model in the Pure Noise to the Rescue of Insufficient Data: Improving Imbalanced Classification by Training on Random Noise Images paper on the CIFAR-10-LT (ρ=50) dataset? | Error Rate |
What metrics were used to measure the MDCS model in the MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition paper on the CIFAR-10-LT (ρ=50) dataset? | Error Rate |
What metrics were used to measure the NCL(ResNet32) model in the Nested Collaborative Learning for Long-Tailed Visual Recognition paper on the CIFAR-10-LT (ρ=50) dataset? | Error Rate |
What metrics were used to measure the MetaSAug-LDAM model in the MetaSAug: Meta Semantic Augmentation for Long-Tailed Visual Recognition paper on the CIFAR-10-LT (ρ=50) dataset? | Error Rate |
What metrics were used to measure the LPT model in the LPT: Long-tailed Prompt Tuning for Image Classification paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the PEL (ViT-B/16, ImageNet-21K pre-training) model in the Parameter-Efficient Long-Tailed Recognition paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the PEL (ViT-B/16, CLIP) model in the Parameter-Efficient Long-Tailed Recognition paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the VPT model in the Visual Prompt Tuning paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the BALLAD (ViT-B/16) model in A Simple Long-Tailed Recognition Baseline via Vision-Language Model paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the GLMC + SAM model in the Escaping Saddle Points for Effective Generalization on Class-Imbalanced Data paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the GLMC+MaxNorm (ResNet-32, channel x4) model in the Global and Local Mixture Consistency Cumulative Learning for Long-tailed Visual Recognitions paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the GLMC (ResNet-32, channel x4) model in the Global and Local Mixture Consistency Cumulative Learning for Long-tailed Visual Recognitions paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the MDCS model in the MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the NCL* + WGCC (ensemble) model in the Weight-guided class complementing for long-tailed image recognition paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the OPeN (WideResNet-28-10) model in the Pure Noise to the Rescue of Insufficient Data: Improving Imbalanced Classification by Training on Random Noise Images paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the GML (ResNet-32) model in the Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the BCL(ResNet-32) model in the Balanced Contrastive Learning for Long-Tailed Visual Recognition paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the LTR-weight-balancing model in the Long-Tailed Recognition via Weight Balancing paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the PC model in the Learning Prototype Classifiers for Long-Tailed Recognition paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the NCL(ResNet32) model in the Nested Collaborative Learning for Long-Tailed Visual Recognition paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the VS + ADRW + TLA model in A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the PaCo + SAM model in the Escaping Saddle Points for Effective Generalization on Class-Imbalanced Data paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the Difficulty-Net model in the Difficulty-Net: Learning to Predict Difficulty for Long-Tailed Recognition paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the Paco + BatchFormer model in the BatchFormer: Learning to Explore Sample Relationships for Robust Representation Learning paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the GLAG model in the Long-Tailed Classification with Gradual Balanced Loss and Adaptive Feature Generation paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the Balanced + BatchFormer model in the BatchFormer: Learning to Explore Sample Relationships for Robust Representation Learning paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the PCL model in the Parametric Contrastive Learning paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the RIDE + CMO + Curvature Regularization model in the Curvature-Balanced Feature Manifold Learning for Long-Tailed Classification paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the RIDE 3 experts + CMO model in The Majority Can Help The Minority: Context-rich Minority Oversampling for Long-tailed Classification paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the TADE model in the Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the TLC (4 Experts) model in the Trustworthy Long-Tailed Classification paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the ACE (4 experts) model in the ACE: Ally Complementary Experts for Solving Long-Tailed Recognition in One-Shot paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the RIDE+distill model in the Long-tailed Recognition by Routing Diverse Distribution-Aware Experts paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the GCL model in the Long-tailed Visual Recognition via Gaussian Clouded Logit Adjustment paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the MetaSAug-LDAM model in the MetaSAug: Meta Semantic Augmentation for Long-Tailed Visual Recognition paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the RIDE model in the Long-tailed Recognition by Routing Diverse Distribution-Aware Experts paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the DRO-LT model in the Distributional Robustness Loss for Long-tail Learning paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the LDAM-DRW + CMO model in The Majority Can Help The Minority: Context-rich Minority Oversampling for Long-tailed Classification paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the MiSLAS model in the Improving Calibration for Long-Tailed Recognition paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the Balanced Softmax + CMO model in The Majority Can Help The Minority: Context-rich Minority Oversampling for Long-tailed Classification paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the VS + SAM model in the Escaping Saddle Points for Effective Generalization on Class-Imbalanced Data paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the CBD+TailCalibX model in the Feature Generation for Long-tail Classification paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the Prior-LT model in the Towards Calibrated Model for Long-Tailed Visual Recognition from Prior Perspective paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the OTLM+CE (Resnet-32) model in the Optimal Transport for Long-Tailed Recognition with Learnable Cost Matrix paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the SSD model in the Self Supervision to Distillation for Long-Tailed Visual Recognition paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the MBJ model in the Memory-based Jitter: Improving Visual Recognition on Long-tailed Data with Diversity In Memory paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the UniMix+Bayias (ResNet-32) model in the Towards Calibrated Model for Long-Tailed Visual Recognition from Prior Perspective paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the LADE model in the Disentangling Label Distribution for Long-tailed Visual Recognition paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the ETF Classifier + DR (Resnet) model in the Inducing Neural Collapse in Imbalanced Learning: Do We Really Need a Learnable Classifier at the End of Deep Neural Network? paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the FBL (Resnet-32) model in the Feature-Balanced Loss for Long-Tailed Visual Recognition paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the Hybrid-PSC model in the Contrastive Learning based Hybrid Networks for Long-Tailed Image Classification paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the LDAM-DRW-RSG model in the RSG: A Simple but Effective Module for Learning Imbalanced Datasets paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the CE+DRS+GIT model in the Do Deep Networks Transfer Invariances Across Classes? paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the TSC(ResNet-32) model in the Targeted Supervised Contrastive Learning for Long-Tailed Recognition paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the LDAM-DRW + WGCC model in the Weight-guided class complementing for long-tailed image recognition paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the smDRAGON model in the From Generalized zero-shot learning to long-tail with class descriptors paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the LDAM-DRW + SSP model in the Rethinking the Value of Labels for Improving Class-Imbalanced Learning paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the CE-DRW-IC model in the Posterior Re-calibration for Imbalanced Datasets paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the CDB-loss model in the Class-Wise Difficulty-Balanced Loss for Solving Class-Imbalance paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the ELP model in A Simple Episodic Linear Probe Improves Visual Recognition in the Wild paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the LDAM-DRW model in the Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the CE-DRW model in The Majority Can Help The Minority: Context-rich Minority Oversampling for Long-tailed Classification paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the Cross-Entropy + Curvature Regularization model in the Curvature-Balanced Feature Manifold Learning for Long-Tailed Classification paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the IBLLoss model in the Influence-Balanced Loss for Imbalanced Visual Classification paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the Cross-Entropy (CE) model in the Class-Balanced Loss Based on Effective Number of Samples paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the Cross-Entropy (CE) model in the Revisiting Long-tailed Image Classification: Survey and Benchmarks with New Evaluation Metrics paper on the CIFAR-100-LT (ρ=100) dataset? | Error Rate |
What metrics were used to measure the PEL (ViT-B/16, ImageNet-21K pre-training) model in the Parameter-Efficient Long-Tailed Recognition paper on the CIFAR-100-LT (ρ=10) dataset? | Error Rate |
What metrics were used to measure the LPT model in the LPT: Long-tailed Prompt Tuning for Image Classification paper on the CIFAR-100-LT (ρ=10) dataset? | Error Rate |
What metrics were used to measure the VPT model in the Visual Prompt Tuning paper on the CIFAR-100-LT (ρ=10) dataset? | Error Rate |
What metrics were used to measure the PEL (ViT-B/16, CLIP) model in the Parameter-Efficient Long-Tailed Recognition paper on the CIFAR-100-LT (ρ=10) dataset? | Error Rate |
What metrics were used to measure the GLMC+MaxNorm (ResNet-32, channel x4) model in the Global and Local Mixture Consistency Cumulative Learning for Long-tailed Visual Recognitions paper on the CIFAR-100-LT (ρ=10) dataset? | Error Rate |
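Each row above pairs a `prompts` question with its `metrics_response` answer, separated by pipes. As a minimal sketch (the helper `parse_row` is hypothetical, not part of the dataset), rows in this format can be split back into their two fields like so:

```python
def parse_row(line: str) -> tuple[str, list[str]]:
    """Split a `prompt | metric1, metric2 |` row into its two fields.

    Splits from the right so that only the last two pipes are treated
    as delimiters; the trailing empty segment after the final pipe is
    discarded.
    """
    prompt, metrics, _ = (part.strip() for part in line.rsplit("|", 2))
    return prompt, [m.strip() for m in metrics.split(",")]


row = (
    "What metrics were used to measure the CB Loss model in the "
    "Class-Balanced Loss Based on Effective Number of Samples paper "
    "on the EGTEA dataset? | Average Precision, Average Recall |"
)
prompt, metrics = parse_row(row)
print(metrics)  # ['Average Precision', 'Average Recall']
```

Multi-metric answers (as in the EGTEA rows) come back as a list, while single-metric rows like the CIFAR ones yield a one-element list.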