prompts (string, length 81–413) | metrics_response (string, length 0–371) |
|---|---|
What metrics were used to measure the GFNet-S model in the Global Filter Networks for Image Classification paper on the ImageNet-C dataset? | mean Corruption Error (mCE), Top 1 Accuracy, Number of params |
What metrics were used to measure the DINOv2 (ViT-S/14, frozen model, linear eval) model in the DINOv2: Learning Robust Visual Features without Supervision paper on the ImageNet-C dataset? | mean Corruption Error (mCE), Top 1 Accuracy, Number of params |
What metrics were used to measure the PRIME with JSD (ResNet-50) model in the PRIME: A few primitives can boost robustness to common corruptions paper on the ImageNet-C dataset? | mean Corruption Error (mCE), Top 1 Accuracy, Number of params |
What metrics were used to measure the RVT-Ti* model in the Towards Robust Vision Transformer paper on the ImageNet-C dataset? | mean Corruption Error (mCE), Top 1 Accuracy, Number of params |
What metrics were used to measure the PRIME (ResNet-50) model in the PRIME: A few primitives can boost robustness to common corruptions paper on the ImageNet-C dataset? | mean Corruption Error (mCE), Top 1 Accuracy, Number of params |
What metrics were used to measure the APR-SP + DeepAugment (ResNet-50) model in the Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain paper on the ImageNet-C dataset? | mean Corruption Error (mCE), Top 1 Accuracy, Number of params |
What metrics were used to measure the DeepAugment (ResNet-50) model in The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization paper on the ImageNet-C dataset? | mean Corruption Error (mCE), Top 1 Accuracy, Number of params |
What metrics were used to measure the APR-SP (ResNet-50) model in the Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain paper on the ImageNet-C dataset? | mean Corruption Error (mCE), Top 1 Accuracy, Number of params |
What metrics were used to measure the AugMix (ResNet-50) model in the AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty paper on the ImageNet-C dataset? | mean Corruption Error (mCE), Top 1 Accuracy, Number of params |
What metrics were used to measure the Stylized ImageNet (ResNet-50) model in the ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness paper on the ImageNet-C dataset? | mean Corruption Error (mCE), Top 1 Accuracy, Number of params |
What metrics were used to measure the Group-wise Inhibition (ResNet-50) model in the Group-wise Inhibition based Feature Regularization for Robust Classification paper on the ImageNet-C dataset? | mean Corruption Error (mCE), Top 1 Accuracy, Number of params |
What metrics were used to measure the ResNet-50 model in the Benchmarking Neural Network Robustness to Common Corruptions and Perturbations paper on the ImageNet-C dataset? | mean Corruption Error (mCE), Top 1 Accuracy, Number of params |
What metrics were used to measure the ViT-B/16-SAM model in the When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations paper on the ImageNet-C dataset? | mean Corruption Error (mCE), Top 1 Accuracy, Number of params |
What metrics were used to measure the ResNet-152x2-SAM model in the When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations paper on the ImageNet-C dataset? | mean Corruption Error (mCE), Top 1 Accuracy, Number of params |
What metrics were used to measure the Mixer-B/8-SAM model in the When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations paper on the ImageNet-C dataset? | mean Corruption Error (mCE), Top 1 Accuracy, Number of params |
What metrics were used to measure the HRDA model in the Domain Adaptive and Generalizable Network Architectures and Training Strategies for Semantic Image Segmentation paper on the GTA-to-Avg(Cityscapes,BDD,Mapillary) dataset? | mIoU |
What metrics were used to measure the DAFormer model in the Domain Adaptive and Generalizable Network Architectures and Training Strategies for Semantic Image Segmentation paper on the GTA-to-Avg(Cityscapes,BDD,Mapillary) dataset? | mIoU |
What metrics were used to measure the SHADE model in the Style-Hallucinated Dual Consistency Learning for Domain Generalized Semantic Segmentation paper on the GTA-to-Avg(Cityscapes,BDD,Mapillary) dataset? | mIoU |
What metrics were used to measure the SAN-SAW model in the Semantic-Aware Domain Generalized Segmentation paper on the GTA-to-Avg(Cityscapes,BDD,Mapillary) dataset? | mIoU |
What metrics were used to measure the AdvStyle model in the Adversarial Style Augmentation for Domain Generalized Urban-Scene Segmentation paper on the GTA-to-Avg(Cityscapes,BDD,Mapillary) dataset? | mIoU |
What metrics were used to measure the GTR model in the Global and Local Texture Randomization for Synthetic-to-Real Semantic Segmentation paper on the GTA-to-Avg(Cityscapes,BDD,Mapillary) dataset? | mIoU |
What metrics were used to measure the DRPC model in the Domain Randomization and Pyramid Consistency: Simulation-to-Real Generalization without Accessing Target Domain Data paper on the GTA-to-Avg(Cityscapes,BDD,Mapillary) dataset? | mIoU |
What metrics were used to measure the RobustNet model in the RobustNet: Improving Domain Generalization in Urban-Scene Segmentation via Instance Selective Whitening paper on the GTA-to-Avg(Cityscapes,BDD,Mapillary) dataset? | mIoU |
What metrics were used to measure the IBN model in the Two at Once: Enhancing Learning and Generalization Capacities via IBN-Net paper on the GTA-to-Avg(Cityscapes,BDD,Mapillary) dataset? | mIoU |
What metrics were used to measure the GtA-SFDA Source-Only (DeepLabv2-ResNet101) model in the Generalize then Adapt: Source-Free Domain Adaptive Semantic Segmentation paper on the GTA5-to-Cityscapes dataset? | mIoU |
What metrics were used to measure the PromptStyler (CLIP, ViT-L/14) model in the PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the SIMPLE+ model in the SIMPLE: Specialized Model-Sample Matching for Domain Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the CAR-FT (CLIP, ViT-B/16) model in the Context-Aware Robust Fine-Tuning paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the SIMPLE model in the SIMPLE: Specialized Model-Sample Matching for Domain Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the Ensemble of Averages (RegNetY-16GF) model in the Ensemble of Averages: Improving Model Selection and Boosting Performance in Domain Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the PromptStyler (CLIP, ViT-B/16) model in the PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the MIRO (RegNetY-16GF, SWAD) model in the Domain Generalization by Mutual-Information Regularization with Pre-trained Models paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the SEDGE+ model in the Domain Generalization using Pretrained Models without Fine-tuning paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the Ensemble of Averages (ResNeXt-50 32x4d) model in the Ensemble of Averages: Improving Model Selection and Boosting Performance in Domain Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the SEDGE model in the Domain Generalization using Pretrained Models without Fine-tuning paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the CADG model in the CADG: A Model Based on Cross Attention for Domain Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the GMoE-S/16 model in the Sparse Mixture-of-Experts are Domain Generalizable Learners paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the PromptStyler (CLIP, ResNet-50) model in the PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the Model Ratatouille model in the Model Ratatouille: Recycling Diverse Models for Out-of-Distribution Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the Ensemble of Averages (ResNet-50) model in the Ensemble of Averages: Improving Model Selection and Boosting Performance in Domain Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the MIRO (ResNet-50, SWAD) model in the Domain Generalization by Mutual-Information Regularization with Pre-trained Models paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the DDG model in the Dynamic Domain Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the PCL (SWAD + ResNet-50) model in the PCL: Proxy-Based Contrastive Learning for Domain Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the VNE (ResNet-50, SWAD) model in the VNE: An Effective Method for Improving Deep Representation by Manipulating Eigenvalue Distribution paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the SWAD (ResNet-50) model in the SWAD: Domain Generalization by Seeking Flat Minima paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the WAKD (DeiT-Ti) model in the Weight Averaging Improves Knowledge Distillation under Domain Shift paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the DREAME model in the Automated Domain Discovery from Multiple Sources to Improve Zero-Shot Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the AdaClust (ResNet-50, SWAD) model in the Adaptive Methods for Aggregated Domain Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the Fishr (ResNet-50) model in the Fishr: Invariant Gradient Variances for Out-of-Distribution Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the POEM model in the POEM: Polarization of Embeddings for Domain-Invariant Representations paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the AdaClust (ResNet-50) model in the Adaptive Methods for Aggregated Domain Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the XDED (ResNet-18) model in the Cross-Domain Ensemble Distillation for Domain Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the WAKD (ResNet-18) model in the Weight Averaging Improves Knowledge Distillation under Domain Shift paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the Jeon et al. (ResNet-50) model in the Feature Stylization and Domain-aware Contrastive Learning for Domain Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the LRDG (ResNet-18) model in the Domain Generalization by Learning and Removing Domain-specific Features paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the RSC (ResNet-18) model in the Self-Challenging Improves Cross-Domain Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the DADG (ResNet-18) model in the Discriminative Adversarial Domain Generalization with Meta-learning based Cross-domain Validation paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the D-Triplet (RegNetY-16GF) model in the Domain-aware Triplet loss in Domain Generalization paper on the Office-Home dataset? | Average Accuracy, Average |
What metrics were used to measure the CAR-FT (CLIP, ViT-B/16) model in the Context-Aware Robust Fine-Tuning paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the PromptStyler (CLIP, ViT-B/16) model in the PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the SIMPLE+ model in the SIMPLE: Specialized Model-Sample Matching for Domain Generalization paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the PromptStyler (CLIP, ViT-L/14) model in the PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the PromptStyler (CLIP, ResNet-50) model in the PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the SEDGE+ model in the Domain Generalization using Pretrained Models without Fine-tuning paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the CADG model in the CADG: A Model Based on Cross Attention for Domain Generalization paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the MIRO (RegNetY-16GF, SWAD) model in the Domain Generalization by Mutual-Information Regularization with Pre-trained Models paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the Ensemble of Averages (RegNetY-16GF) model in the Ensemble of Averages: Improving Model Selection and Boosting Performance in Domain Generalization paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the Ensemble of Averages (ResNeXt-50 32x4d) model in the Ensemble of Averages: Improving Model Selection and Boosting Performance in Domain Generalization paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the GMoE-S/16 model in the Sparse Mixture-of-Experts are Domain Generalizable Learners paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the SIMPLE model in the SIMPLE: Specialized Model-Sample Matching for Domain Generalization paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the SEDGE model in the Domain Generalization using Pretrained Models without Fine-tuning paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the VNE (ResNet-50, SWAD) model in the VNE: An Effective Method for Improving Deep Representation by Manipulating Eigenvalue Distribution paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the AdaClust (ResNet-50, SWAD) model in the Adaptive Methods for Aggregated Domain Generalization paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the MIRO (ResNet-50, SWAD) model in the Domain Generalization by Mutual-Information Regularization with Pre-trained Models paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the POEM model in the POEM: Polarization of Embeddings for Domain-Invariant Representations paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the SWAD (ResNet-50) model in the SWAD: Domain Generalization by Seeking Flat Minima paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the Ensemble of Averages (ResNet-50) model in the Ensemble of Averages: Improving Model Selection and Boosting Performance in Domain Generalization paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the DREAME model in the Automated Domain Discovery from Multiple Sources to Improve Zero-Shot Generalization paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the AdaClust (ResNet-50) model in the Adaptive Methods for Aggregated Domain Generalization paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the DADG (ResNet-18) model in the Discriminative Adversarial Domain Generalization with Meta-learning based Cross-domain Validation paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the Fishr (ResNet-50) model in the Fishr: Invariant Gradient Variances for Out-of-Distribution Generalization paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the StableNet (ResNet-18) model in the Deep Stable Learning for Out-Of-Distribution Generalization paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the RSC (AlexNet) model in the Self-Challenging Improves Cross-Domain Generalization paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the DADG (AlexNet) model in the Discriminative Adversarial Domain Generalization with Meta-learning based Cross-domain Validation paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the D-Triplet (RegNetY-16GF) model in the Domain-aware Triplet loss in Domain Generalization paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the D-Triplet (ResNet-50) model in the Domain-aware Triplet loss in Domain Generalization paper on the VLCS dataset? | Average Accuracy, Average |
What metrics were used to measure the NAS-OoD model in the NAS-OoD: Neural Architecture Search for Out-of-Distribution Generalization paper on the NICO Animal dataset? | Accuracy |
What metrics were used to measure the DecAug (ResNet-18) model in the DecAug: Out-of-Distribution Generalization via Decomposed Feature Representation and Semantic Augmentation paper on the NICO Animal dataset? | Accuracy |
What metrics were used to measure the JiGen (ResNet-18) model in the Domain Generalization by Solving Jigsaw Puzzles paper on the NICO Animal dataset? | Accuracy |
What metrics were used to measure the CORAL (ResNet-18) model in the Deep CORAL: Correlation Alignment for Deep Domain Adaptation paper on the NICO Animal dataset? | Accuracy |
What metrics were used to measure the DRO (ResNet-18) model in the Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization paper on the NICO Animal dataset? | Accuracy |
What metrics were used to measure the Model soups (BASIC-L) model in the Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time paper on the ImageNet-Sketch dataset? | Top-1 accuracy |
What metrics were used to measure the Model soups (ViT-G/14) model in the Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time paper on the ImageNet-Sketch dataset? | Top-1 accuracy |
What metrics were used to measure the CAR-FT (CLIP, ViT-L/14@336px) model in the Context-Aware Robust Fine-Tuning paper on the ImageNet-Sketch dataset? | Top-1 accuracy |
What metrics were used to measure the ConvNeXt-XL (Im21k, 384) model in the A ConvNet for the 2020s paper on the ImageNet-Sketch dataset? | Top-1 accuracy |
What metrics were used to measure the LLE (ViT-H/14, MAE, Edge Aug) model in the A Whac-A-Mole Dilemma: Shortcuts Come in Multiples Where Mitigating One Amplifies Others paper on the ImageNet-Sketch dataset? | Top-1 accuracy |
What metrics were used to measure the MAE (ViT-H, 448) model in the Masked Autoencoders Are Scalable Vision Learners paper on the ImageNet-Sketch dataset? | Top-1 accuracy |
What metrics were used to measure the MAE+DAT (ViT-H) model in the Enhance the Visual Representation via Discrete Adversarial Training paper on the ImageNet-Sketch dataset? | Top-1 accuracy |
What metrics were used to measure the GPaCo (ViT-L) model in the Generalized Parametric Contrastive Learning paper on the ImageNet-Sketch dataset? | Top-1 accuracy |
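
As a minimal sketch of how rows like the ones above could be consumed, the snippet below loads the table with the Hugging Face `datasets` library and tallies individual metric names across all responses. The Hub path `user/metrics-qa` and the `train` split are placeholder assumptions, not the real dataset id; the `prompts` and `metrics_response` column names come from the table header.

```python
# Minimal sketch: load a prompts/metrics_response table and tally metric names.
# "user/metrics-qa" is a hypothetical Hub path standing in for this dataset.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("user/metrics-qa", split="train")  # assumed split name

# Each row pairs a natural-language question with a comma-separated metric list.
row = ds[0]
print(row["prompts"])           # "What metrics were used to measure the ..."
print(row["metrics_response"])  # "mean Corruption Error (mCE), Top 1 Accuracy, ..."

# Count how often each individual metric appears across the split.
metric_counts = Counter(
    m.strip()
    for r in ds
    if r["metrics_response"]  # responses can be empty (column min length is 0)
    for m in r["metrics_response"].split(",")
)
print(metric_counts.most_common(5))
```

Splitting on commas works here because each response is a flat, comma-separated list of metric names; since the response column's minimum length is 0, empty responses are skipped before splitting.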