Columns: prompts (string, length 81–413), metrics_response (string, length 0–371)
What metrics were used to measure the ResNet-50 (IN-C_contrast) model in the Measuring Robustness to Natural Distribution Shifts in Image Classification paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the ResNet-50 (IN-C_jpeg_compression) model in the Measuring Robustness to Natural Distribution Shifts in Image Classification paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the ResNet-50 (IN-C_gaussian_noise) model in the Measuring Robustness to Natural Distribution Shifts in Image Classification paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the VGG-19 BN model in the Very Deep Convolutional Networks for Large-Scale Image Recognition paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the ResNet-50 (IN-C_frost) model in the Measuring Robustness to Natural Distribution Shifts in Image Classification paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the MobileNetV2 (lpf3) model in the Making Convolutional Networks Shift-Invariant Again paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the ResNet-50 (IN-C_fog_aws) model in the Measuring Robustness to Natural Distribution Shifts in Image Classification paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the MobileNetV2 (lpf5) model in the Making Convolutional Networks Shift-Invariant Again paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the ResNet-50 (IN-C_motion_blur) model in the Measuring Robustness to Natural Distribution Shifts in Image Classification paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the ResNet-18 (lpf3) model in the Making Convolutional Networks Shift-Invariant Again paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the MobileNetV2 (lpf2) model in the Making Convolutional Networks Shift-Invariant Again paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the ResNet-18 (lpf2) model in the Making Convolutional Networks Shift-Invariant Again paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the VGG-16 (lpf3) model in the Making Convolutional Networks Shift-Invariant Again paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the EfficientNet-B0 (autoaug) model in the AutoAugment: Learning Augmentation Policies from Data paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the VGG-19 model in the Very Deep Convolutional Networks for Large-Scale Image Recognition paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the VGG-16 model in the Very Deep Convolutional Networks for Large-Scale Image Recognition paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the ResNet-18 (lpf5) model in the Making Convolutional Networks Shift-Invariant Again paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the VGG-16 (lpf5) model in the Making Convolutional Networks Shift-Invariant Again paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the EfficientNet-B0 model in the EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the VGG-13 BN model in the Very Deep Convolutional Networks for Large-Scale Image Recognition paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the VGG-16 (lpf2) model in the Making Convolutional Networks Shift-Invariant Again paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the VGG-11 BN model in the Very Deep Convolutional Networks for Large-Scale Image Recognition paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the ResNet-50 (IN-C_zoom_blur) model in the Measuring Robustness to Natural Distribution Shifts in Image Classification paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the VGG-13 model in the Very Deep Convolutional Networks for Large-Scale Image Recognition paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the VGG-11 model in the Very Deep Convolutional Networks for Large-Scale Image Recognition paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the ResNet-50 (IN-C_greyscale) model in the Measuring Robustness to Natural Distribution Shifts in Image Classification paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the ResNet-50 (adv-train-free) model in the Adversarial Training for Free! paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the ResNet-50 (SIN) model in the ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the AlexNet (lpf3) model in the Making Convolutional Networks Shift-Invariant Again paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the AlexNet (lpf2) model in the Making Convolutional Networks Shift-Invariant Again paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the AlexNet (lpf5) model in the Making Convolutional Networks Shift-Invariant Again paper on the VizWiz-Classification dataset?
Accuracy - All Images, Accuracy - Corrupted Images, Accuracy - Clean Images
What metrics were used to measure the CDDMSL model in the Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment paper on the Watercolor2k dataset?
mAP
What metrics were used to measure the MIRO (RegNetY-16GF, SWAD) model in the Domain Generalization by Mutual-Information Regularization with Pre-trained Models paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the CAR-FT (CLIP, ViT-B/16) model in the Context-Aware Robust Fine-Tuning paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the Ensemble of Averages (RegNetY-16GF) model in the Ensemble of Averages: Improving Model Selection and Boosting Performance in Domain Generalization paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the SIMPLE+ model in the SIMPLE: Specialized Model-Sample Matching for Domain Generalization paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the SIMPLE model in the SIMPLE: Specialized Model-Sample Matching for Domain Generalization paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the SEDGE+ model in the Domain Generalization using Pretrained Models without Fine-tuning paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the SEDGE model in the Domain Generalization using Pretrained Models without Fine-tuning paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the CADG model in the CADG: A Model Based on Cross Attention for Domain Generalization paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the Ensemble of Averages (ResNeXt-50 32x4d) model in the Ensemble of Averages: Improving Model Selection and Boosting Performance in Domain Generalization paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the MIRO (ResNet-50, SWAD) model in the Domain Generalization by Mutual-Information Regularization with Pre-trained Models paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the Ensemble of Averages (ResNet-50) model in the Ensemble of Averages: Improving Model Selection and Boosting Performance in Domain Generalization paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the Model Ratatouille model in the Model Ratatouille: Recycling Diverse Models for Out-of-Distribution Generalization paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the VNE (ResNet-50, SWAD) model in the VNE: An Effective Method for Improving Deep Representation by Manipulating Eigenvalue Distribution paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the AdaClust (ResNet-50, SWAD) model in the Adaptive Methods for Aggregated Domain Generalization paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the SWAD (ResNet-50) model in the SWAD: Domain Generalization by Seeking Flat Minima paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the POEM model in the POEM: Polarization of Embeddings for Domain-Invariant Representations paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the DREAME model in the Automated Domain Discovery from Multiple Sources to Improve Zero-Shot Generalization paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the GMoE-S/16 model in the Sparse Mixture-of-Experts are Domain Generalizable Learners paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the AdaClust (ResNet-50) model in the Adaptive Methods for Aggregated Domain Generalization paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the Fishr (ResNet-50) model in the Fishr: Invariant Gradient Variances for Out-of-Distribution Generalization paper on the TerraIncognita dataset?
Average Accuracy
What metrics were used to measure the Model soups (BASIC-L) model in the Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the Model soups (ViT-G/14) model in the Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the µ2Net+ (ViT-L/16) model in the A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the CAR-FT (CLIP, ViT-L/14@336px) model in the Context-Aware Robust Fine-Tuning paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the MAE (ViT-H, 448) model in the Masked Autoencoders Are Scalable Vision Learners paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the FAN-Hybrid-L(IN-21K, 384) model in the Understanding The Robustness in Vision Transformers paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the ConvNeXt-XL (Im21k, 384) model in the A ConvNet for the 2020s paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the MAE+DAT (ViT-H) model in the Enhance the Visual Representation via Discrete Adversarial Training paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the Pyramid Adversarial Training Improves ViT (Im21k) model in the Pyramid Adversarial Training Improves ViT Performance paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the SEER (RegNet10B) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the FAN-L-Hybrid+STL model in the Fully Attentional Networks with Self-emerging Token Labeling paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the Pyramid Adversarial Training Improves ViT (384x384) model in the Pyramid Adversarial Training Improves ViT Performance paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the Sequencer2D-L model in the Sequencer: Deep LSTM for Image Classification paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the Diffusion Classifier model in the Your Diffusion Model is Secretly a Zero-Shot Classifier paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the RVT-B* model in the Towards Robust Vision Transformer paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the RVT-S* model in the Towards Robust Vision Transformer paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the RVT-Ti* model in the Towards Robust Vision Transformer paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the GFNet-S model in the Global Filter Networks for Image Classification paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the CutMix+MoEx (ResNet-50) model in the On Feature Normalization and Data Augmentation paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the CutMix (ResNet-50) model in the CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the Mixup (ResNet-50) model in the mixup: Beyond Empirical Risk Minimization paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the Cutout (ResNet-50) model in the Improved Regularization of Convolutional Neural Networks with Cutout paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the ResNet-50 (300 Epochs) model in the Deep Residual Learning for Image Recognition paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the Stylized ImageNet (ResNet-50) model in the ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the ResNet-50 model in the Natural Adversarial Examples paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the DINOv2 (ViT-g/14, frozen model, linear eval) model in the DINOv2: Learning Robust Visual Features without Supervision paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the MAE+DAT (ViT-H) model in the Enhance the Visual Representation via Discrete Adversarial Training paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the DINOv2 (ViT-L/14, frozen model, linear eval) model in the DINOv2: Learning Robust Visual Features without Supervision paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the MAE (ViT-H) model in the Masked Autoencoders Are Scalable Vision Learners paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the FAN-L-Hybrid (IN-22k) model in the Understanding The Robustness in Vision Transformers paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the Pyramid Adversarial Training Improves ViT (Im21k) model in the Pyramid Adversarial Training Improves ViT Performance paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the VOLO-D5+HAT model in the Improving Vision Transformers by Revisiting High-frequency Components paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the DiscreteViT (Im21k) model in the Discrete Representations Strengthen Vision Transformer Robustness paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the ConvNeXt-XL (Im21k) (augmentation overlap with ImageNet-C) model in the A ConvNet for the 2020s paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the GPaCo (ViT-L) model in the Generalized Parametric Contrastive Learning paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the FAN-B-Hybrid (IN-22k) model in the Understanding The Robustness in Vision Transformers paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the Pyramid Adversarial Training Improves ViT model in the Pyramid Adversarial Training Improves ViT Performance paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the FAN-L-Hybrid+STL model in the Fully Attentional Networks with Self-emerging Token Labeling paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the QualNet (ResNeXt101) model in the Quality-Agnostic Image Recognition via Invertible Decoder paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the DINOv2 (ViT-B/14, frozen model, linear eval) model in the DINOv2: Learning Robust Visual Features without Supervision paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the FAN-L-Hybrid model in the Understanding The Robustness in Vision Transformers paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the DrViT model in the Discrete Representations Strengthen Vision Transformer Robustness paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the DiscreteViT model in the Discrete Representations Strengthen Vision Transformer Robustness paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the RVT-B* model in the Towards Robust Vision Transformer paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the Sequencer2D-L model in the Sequencer: Deep LSTM for Image Classification paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the RVT-S* model in the Towards Robust Vision Transformer paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the QualNet (ResNet-50) model in the Quality-Agnostic Image Recognition via Invertible Decoder paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params
What metrics were used to measure the PRIME + DeepAugment (ResNet-50) model in the PRIME: A few primitives can boost robustness to common corruptions paper on the ImageNet-C dataset?
mean Corruption Error (mCE), Top 1 Accuracy, Number of params