Columns: prompts (string, length 81 to 413); metrics_response (string, length 0 to 371)
What metrics were used to measure the GLMC (ResNet-32, channel x4) model in the Global and Local Mixture Consistency Cumulative Learning for Long-tailed Visual Recognitions paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the PC model in the Learning Prototype Classifiers for Long-Tailed Recognition paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the LTR-weight-balancing model in the Long-Tailed Recognition via Weight Balancing paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the GML (ResNet-32) model in the Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the Difficulty-Net model in the Difficulty-Net: Learning to Predict Difficulty for Long-Tailed Recognition paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the BCL+CUDA model in the CUDA: Curriculum of Data Augmentation for Long-Tailed Recognition paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the GLAG model in the Long-Tailed Classification with Gradual Balanced Loss and Adaptive Feature Generation paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the LCReg model in the Long-tailed Recognition by Learning from Latent Categories paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the TADE model in the Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the DRO-LT model in the Distributional Robustness Loss for Long-tail Learning paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the MiSLAS model in the Improving Calibration for Long-Tailed Recognition paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the SMC model in the Supervised Contrastive Learning on Blended Images for Long-tailed Recognition paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the Hybrid-PSC model in the Contrastive Learning based Hybrid Networks for Long-Tailed Image Classification paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the LADE model in the Disentangling Label Distribution for Long-tailed Visual Recognition paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the RIDE + CMO + Curvature Regularization model in the Curvature-Balanced Feature Manifold Learning for Long-Tailed Classification paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the MetaSAug-LDAM model in the MetaSAug: Meta Semantic Augmentation for Long-Tailed Visual Recognition paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the UniMix+Bayias (ResNet-32) model in the Towards Calibrated Model for Long-Tailed Visual Recognition from Prior Perspective paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the CBD+TailCalibX model in the Feature Generation for Long-tail Classification paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the ELP model in the A Simple Episodic Linear Probe Improves Visual Recognition in the Wild paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the LDAM-DRW + SSP model in the Rethinking the Value of Labels for Improving Class-Imbalanced Learning paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the smDRAGON model in the From Generalized zero-shot learning to long-tail with class descriptors paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the CDB-loss model in the Class-Wise Difficulty-Balanced Loss for Solving Class-Imbalance paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the LDAM-DRW model in the Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
What metrics were used to measure the CE-DRW-IC model in the Posterior Re-calibration for Imbalanced Datasets paper on the CIFAR-100-LT (ρ=10) dataset?
Error Rate
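All of the CIFAR-100-LT (ρ=10) entries above report Error Rate, i.e. one minus top-1 accuracy. A minimal sketch of that computation, assuming integer class predictions and labels (the function name is illustrative, not taken from any of the papers):

import numpy as np

def error_rate(predictions: np.ndarray, labels: np.ndarray) -> float:
    # Fraction of samples whose predicted class differs from the true class.
    return float(np.mean(predictions != labels))

# Example: 2 mistakes out of 5 samples -> 0.4 (40% error rate).
preds  = np.array([2, 1, 1, 1, 3])
labels = np.array([2, 1, 1, 0, 0])
print(error_rate(preds, labels))  # 0.4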
What metrics were used to measure the Decoupling (cRT) model in the Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study paper on the MIMIC-CXR-LT dataset?
Balanced Accuracy
What metrics were used to measure the Reweighted LDAM-DRW model in the Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study paper on the MIMIC-CXR-LT dataset?
Balanced Accuracy
What metrics were used to measure the Class-balanced LDAM-DRW model in the Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study paper on the MIMIC-CXR-LT dataset?
Balanced Accuracy
What metrics were used to measure the Reweighted LDAM model in the Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study paper on the MIMIC-CXR-LT dataset?
Balanced Accuracy
What metrics were used to measure the Reweighted Focal Loss model in the Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study paper on the MIMIC-CXR-LT dataset?
Balanced Accuracy
What metrics were used to measure the Decoupling (tau-norm) model in the Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study paper on the MIMIC-CXR-LT dataset?
Balanced Accuracy
What metrics were used to measure the Class-balanced Softmax model in the Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study paper on the MIMIC-CXR-LT dataset?
Balanced Accuracy
What metrics were used to measure the Class-balanced LDAM model in the Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study paper on the MIMIC-CXR-LT dataset?
Balanced Accuracy
What metrics were used to measure the Reweighted Softmax model in the Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study paper on the MIMIC-CXR-LT dataset?
Balanced Accuracy
What metrics were used to measure the Class-balanced Focal Loss model in the Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study paper on the MIMIC-CXR-LT dataset?
Balanced Accuracy
What metrics were used to measure the MixUp model in the Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study paper on the MIMIC-CXR-LT dataset?
Balanced Accuracy
What metrics were used to measure the Focal Loss model in the Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study paper on the MIMIC-CXR-LT dataset?
Balanced Accuracy
What metrics were used to measure the Softmax model in the Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study paper on the MIMIC-CXR-LT dataset?
Balanced Accuracy
What metrics were used to measure the Balanced-MixUp model in the Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study paper on the MIMIC-CXR-LT dataset?
Balanced Accuracy
What metrics were used to measure the LDAM model in the Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study paper on the MIMIC-CXR-LT dataset?
Balanced Accuracy
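The MIMIC-CXR-LT entries above all report Balanced Accuracy, the mean of per-class recalls, so head and tail classes weigh equally. A minimal sketch, assuming integer-encoded single-label classes (this should agree with sklearn.metrics.balanced_accuracy_score):

import numpy as np

def balanced_accuracy(predictions: np.ndarray, labels: np.ndarray) -> float:
    # Recall of each class (accuracy restricted to that class), averaged over classes.
    recalls = [np.mean(predictions[labels == c] == c) for c in np.unique(labels)]
    return float(np.mean(recalls))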
What metrics were used to measure the GLMC+MaxNorm (ResNet-32, channel x4) model in the Global and Local Mixture Consistency Cumulative Learning for Long-tailed Visual Recognitions paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the GLMC + SAM model in the Escaping Saddle Points for Effective Generalization on Class-Imbalanced Data paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the LPT model in the LPT: Long-tailed Prompt Tuning for Image Classification paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the GLMC (ResNet-32, channel x4) model in the Global and Local Mixture Consistency Cumulative Learning for Long-tailed Visual Recognitions paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the OPeN (WideResNet-28-10) model in the Pure Noise to the Rescue of Insufficient Data: Improving Imbalanced Classification by Training on Random Noise Images paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the SimSiam+rwSAM model in the Self-supervised Learning is More Robust to Dataset Imbalance paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the NCL(ResNet32) model in the Nested Collaborative Learning for Long-Tailed Visual Recognition paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the NCL* + WGCC (ensemble) model in the Weight-guided class complementing for long-tailed image recognition paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the TADE model in the Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the GCL model in the Long-tailed Visual Recognition via Gaussian Clouded Logit Adjustment paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the FBL (ResNet-32) model in the Feature-Balanced Loss for Long-Tailed Visual Recognition paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the VS + SAM model in the Escaping Saddle Points for Effective Generalization on Class-Imbalanced Data paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the MiSLAS model in the Improving Calibration for Long-Tailed Recognition paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the ACE (4 experts) model in the ACE: Ally Complementary Experts for Solving Long-Tailed Recognition in One-Shot paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the MetaSAug-LDAM model in the MetaSAug: Meta Semantic Augmentation for Long-Tailed Visual Recognition paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the TLC (4 experts) model in the Trustworthy Long-Tailed Classification paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the smDRAGON model in the From Generalized zero-shot learning to long-tail with class descriptors paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the CE+DRS+GIT model in the Do Deep Networks Transfer Invariances Across Classes? paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the TSC(ResNet-32) model in the Targeted Supervised Contrastive Learning for Long-Tailed Recognition paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the ELP model in the A Simple Episodic Linear Probe Improves Visual Recognition in the Wild paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the LDAM-DRW + SSP model in the Rethinking the Value of Labels for Improving Class-Imbalanced Learning paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the LDAM-DRW model in the Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the ETF Classifier + DR (Resnet) model in the Inducing Neural Collapse in Imbalanced Learning: Do We Really Need a Learnable Classifier at the End of Deep Neural Network? paper on the CIFAR-10-LT (ρ=100) dataset?
Error Rate
What metrics were used to measure the OPeN (WideResNet-28-10) model in the Pure Noise to the Rescue of Insufficient Data: Improving Imbalanced Classification by Training on Random Noise Images paper on the CelebA-5 dataset?
Error Rate
What metrics were used to measure the Character-BERT+RS model in the Text Classification in the Wild: a Large-scale Long-tailed Name Normalization Dataset paper on the Lot-insts dataset?
Macro-F1
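The Lot-insts entry reports Macro-F1: F1 is computed per class and then averaged with equal class weight. A minimal sketch, assuming integer class labels (equivalent in intent to sklearn.metrics.f1_score with average="macro"):

import numpy as np

def macro_f1(predictions: np.ndarray, labels: np.ndarray) -> float:
    f1_scores = []
    for c in np.unique(labels):
        tp = np.sum((predictions == c) & (labels == c))
        fp = np.sum((predictions == c) & (labels != c))
        fn = np.sum((predictions != c) & (labels == c))
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1_scores.append(2 * precision * recall / (precision + recall)
                         if (precision + recall) else 0.0)
    return float(np.mean(f1_scores))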
What metrics were used to measure the RIDE + IFL model in the Invariant Feature Learning for Generalized Long-Tailed Classification paper on the ImageNet-GLT dataset?
Accuracy
What metrics were used to measure the RandAug + IFL model in the Invariant Feature Learning for Generalized Long-Tailed Classification paper on the ImageNet-GLT dataset?
Accuracy
What metrics were used to measure the Logit-Adj + IFL model in the Invariant Feature Learning for Generalized Long-Tailed Classification paper on the ImageNet-GLT dataset?
Accuracy
What metrics were used to measure the BLSoftmax + IFL model in the Invariant Feature Learning for Generalized Long-Tailed Classification paper on the ImageNet-GLT dataset?
Accuracy
What metrics were used to measure the LDAM model in the Invariant Feature Learning for Generalized Long-Tailed Classification paper on the ImageNet-GLT dataset?
Accuracy
What metrics were used to measure the cRT model in the Invariant Feature Learning for Generalized Long-Tailed Classification paper on the ImageNet-GLT dataset?
Accuracy
What metrics were used to measure the MVCN model in the Better Generalized Few-Shot Learning Even Without Base Data paper on the CUB dataset?
Per-Class Accuracy (1-shot), Per-Class Accuracy (2-shots), Per-Class Accuracy (5-shots), Per-Class Accuracy (10-shots), Per-Class Accuracy (20-shots)
What metrics were used to measure the CADA-VAE model in the Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders paper on the CUB dataset?
Per-Class Accuracy (1-shot), Per-Class Accuracy (2-shots), Per-Class Accuracy (5-shots), Per-Class Accuracy (10-shots), Per-Class Accuracy (20-shots)
What metrics were used to measure the CA-VAE model in the Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders paper on the CUB dataset?
Per-Class Accuracy (1-shot), Per-Class Accuracy (2-shots), Per-Class Accuracy (5-shots), Per-Class Accuracy (10-shots), Per-Class Accuracy (20-shots)
What metrics were used to measure the DA-VAE model in the Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders paper on the CUB dataset?
Per-Class Accuracy (1-shot), Per-Class Accuracy (2-shots), Per-Class Accuracy (5-shots), Per-Class Accuracy (10-shots), Per-Class Accuracy (20-shots)
What metrics were used to measure the REVISE model in the Learning Robust Visual-Semantic Embeddings paper on the CUB dataset?
Per-Class Accuracy (1-shot), Per-Class Accuracy (2-shots), Per-Class Accuracy (5-shots), Per-Class Accuracy (10-shots), Per-Class Accuracy (20-shots)
What metrics were used to measure the DRAGON model in the From Generalized zero-shot learning to long-tail with class descriptors paper on the SUN dataset?
Per-Class Accuracy (1-shot), Per-Class Accuracy (2-shots), Per-Class Accuracy (5-shots), Per-Class Accuracy (10-shots)
What metrics were used to measure the DA-VAE model in the Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders paper on the SUN dataset?
Per-Class Accuracy (1-shot), Per-Class Accuracy (2-shots), Per-Class Accuracy (5-shots), Per-Class Accuracy (10-shots)
What metrics were used to measure the CADA-VAE model in the Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders paper on the SUN dataset?
Per-Class Accuracy (1-shot), Per-Class Accuracy (2-shots), Per-Class Accuracy (5-shots), Per-Class Accuracy (10-shots)
What metrics were used to measure the CA-VAE model in the Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders paper on the SUN dataset?
Per-Class Accuracy (1-shot), Per-Class Accuracy (2-shots), Per-Class Accuracy (5-shots), Per-Class Accuracy (10-shots)
What metrics were used to measure the REVISE model in the Learning Robust Visual-Semantic Embeddings paper on the SUN dataset?
Per-Class Accuracy (1-shot), Per-Class Accuracy (2-shots), Per-Class Accuracy (5-shots), Per-Class Accuracy (10-shots)
What metrics were used to measure the MVCN model in the Better Generalized Few-Shot Learning Even Without Base Data paper on the AwA2 dataset?
Per-Class Accuracy (1-shot), Per-Class Accuracy (2-shots), Per-Class Accuracy (5-shots), Per-Class Accuracy (10-shots), Per-Class Accuracy (20-shots)
What metrics were used to measure the CADA-VAE model in the Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders paper on the AwA2 dataset?
Per-Class Accuracy (1-shot), Per-Class Accuracy (2-shots), Per-Class Accuracy (5-shots), Per-Class Accuracy (10-shots), Per-Class Accuracy (20-shots)
What metrics were used to measure the DA-VAE model in the Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders paper on the AwA2 dataset?
Per-Class Accuracy (1-shot), Per-Class Accuracy (2-shots), Per-Class Accuracy (5-shots), Per-Class Accuracy (10-shots), Per-Class Accuracy (20-shots)
What metrics were used to measure the DRAGON model in the From Generalized zero-shot learning to long-tail with class descriptors paper on the AwA2 dataset?
Per-Class Accuracy (1-shot), Per-Class Accuracy (2-shots), Per-Class Accuracy (5-shots), Per-Class Accuracy (10-shots), Per-Class Accuracy (20-shots)
What metrics were used to measure the CA-VAE model in the Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders paper on the AwA2 dataset?
Per-Class Accuracy (1-shot), Per-Class Accuracy (2-shots), Per-Class Accuracy (5-shots), Per-Class Accuracy (10-shots), Per-Class Accuracy (20-shots)
What metrics were used to measure the REVISE model in the Learning Robust Visual-Semantic Embeddings paper on the AwA2 dataset?
Per-Class Accuracy (1-shot), Per-Class Accuracy (2-shots), Per-Class Accuracy (5-shots), Per-Class Accuracy (10-shots), Per-Class Accuracy (20-shots)
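The CUB, SUN, and AwA2 entries report Per-Class Accuracy at several shot budgets (1, 2, 5, 10, and up to 20 labelled examples per novel class; the SUN entries stop at 10). Per-class accuracy is the same average-of-per-class-recalls quantity sketched for balanced accuracy above; only the evaluation protocol changes with the shot budget. A hypothetical evaluation loop (run_episode is a placeholder, not from any of the papers):

for k in (1, 2, 5, 10, 20):
    predictions, labels = run_episode(num_shots=k)  # hypothetical k-shot evaluation episode
    print(f"Per-Class Accuracy ({k}-shot): {balanced_accuracy(predictions, labels):.3f}")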
What metrics were used to measure the BasicVSR++ model in the BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment paper on the MFQE v2 dataset?
Incremental PSNR, Parameters(M)
What metrics were used to measure the S2SVR model in the Unsupervised Flow-Aligned Sequence-to-Sequence Learning for Video Restoration paper on the MFQE v2 dataset?
Incremental PSNR, Parameters(M)
What metrics were used to measure the STDF model in the Spatio-temporal deformable convolution for compressed video quality enhancement paper on the MFQE v2 dataset?
Incremental PSNR, Parameters(M)
What metrics were used to measure the EDVR model in the EDVR: Video Restoration with Enhanced Deformable Convolutional Networks paper on the MFQE v2 dataset?
Incremental PSNR, Parameters(M)
What metrics were used to measure the MFQE 2.0 model in the MFQE 2.0: A New Approach for Multi-frame Quality Enhancement on Compressed Video paper on the MFQE v2 dataset?
Incremental PSNR, Parameters(M)
What metrics were used to measure the MFQE 1.0 model in the Multi-Frame Quality Enhancement for Compressed Video paper on the MFQE v2 dataset?
Incremental PSNR, Parameters(M)
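The MFQE v2 entries report Incremental PSNR (ΔPSNR), the PSNR gain of the enhanced frame over the compressed input frame when both are compared against the uncompressed reference, alongside the model's parameter count in millions. A minimal sketch of ΔPSNR, assuming frames are float arrays scaled to [0, 1]:

import numpy as np

def psnr(x: np.ndarray, reference: np.ndarray, peak: float = 1.0) -> float:
    mse = np.mean((x - reference) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

def incremental_psnr(enhanced: np.ndarray, compressed: np.ndarray, reference: np.ndarray) -> float:
    # PSNR improvement contributed by the enhancement model.
    return psnr(enhanced, reference) - psnr(compressed, reference)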
What metrics were used to measure the TransC (bern) model in the Differentiating Concepts and Instances for Knowledge Graph Embedding paper on the YAGO39K dataset?
Accuracy, F1-Score, Precision, Recall
What metrics were used to measure the DS-UNet model in the Attention to Fires: Multi-Channel Deep Learning Models for Wildfire Severity Prediction paper on the Burned Area Delineation from Satellite Imagery dataset?
RMSE
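The Burned Area Delineation entry reports RMSE. A minimal sketch, assuming predictions and targets are equally shaped numeric arrays (e.g., per-pixel severity maps):

import numpy as np

def rmse(predictions: np.ndarray, targets: np.ndarray) -> float:
    # Root of the mean squared difference between prediction and target.
    return float(np.sqrt(np.mean((predictions - targets) ** 2)))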
What metrics were used to measure the VAST model in the VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset paper on the YouCook2 dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank
What metrics were used to measure the UniVL + MELTR model in the MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models paper on the YouCook2 dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank
What metrics were used to measure the VideoCLIP model in the VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding paper on the YouCook2 dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank
What metrics were used to measure the MDMMT-2 model in the MDMMT-2: Multidomain Multimodal Transformer for Video Retrieval, One More Step Towards Generalization paper on the YouCook2 dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank
What metrics were used to measure the TACo model in the TACo: Token-aware Cascade Contrastive Learning for Video-Text Alignment paper on the YouCook2 dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank
What metrics were used to measure the UniVL model in the UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation paper on the YouCook2 dataset?
text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video Median Rank, text-to-video Mean Rank
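The YouCook2 entries report text-to-video retrieval metrics: Recall@K for K in {1, 5, 10}, Median Rank, and Mean Rank. A minimal sketch, assuming a [num_texts, num_videos] similarity matrix in which text i's ground-truth video is video i (the usual paired-retrieval convention; names are illustrative):

import numpy as np

def retrieval_metrics(similarity: np.ndarray) -> dict:
    order = np.argsort(-similarity, axis=1)               # videos sorted by score, best first
    hits = order == np.arange(similarity.shape[0])[:, None]
    ranks = hits.argmax(axis=1) + 1                       # 1-indexed rank of the true video
    return {
        "R@1": float(np.mean(ranks <= 1)),
        "R@5": float(np.mean(ranks <= 5)),
        "R@10": float(np.mean(ranks <= 10)),
        "Median Rank": float(np.median(ranks)),
        "Mean Rank": float(np.mean(ranks)),
    }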