prompts (string, lengths 81–413)
metrics_response (string, lengths 0–371)
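Every record pairs a templated `prompts` string with a free-form `metrics_response`. Because the prompts follow a fixed template, the model, paper, and dataset fields can be recovered mechanically. The sketch below is a minimal, hypothetical parser (the `PROMPT_RE` pattern and `parse_prompt` helper are illustrative names, not part of the dataset); it assumes the standard template and will raise on prompts with elided fields.

```python
import re

# Assumed template (holds for the records below):
# "What metrics were used to measure the <model> model
#  in the <paper> paper on the <dataset> dataset?"
PROMPT_RE = re.compile(
    r"What metrics were used to measure the (?P<model>.+?) model "
    r"in the (?P<paper>.+?) paper on the (?P<dataset>.+?) dataset\?"
)

def parse_prompt(prompt: str) -> dict:
    """Extract model, paper, and dataset fields from a templated prompt."""
    m = PROMPT_RE.fullmatch(prompt.strip())
    if m is None:
        raise ValueError(f"prompt does not match template: {prompt!r}")
    return m.groupdict()

example = (
    "What metrics were used to measure the STEGO (ViT-B/8) model in the "
    "Unsupervised Semantic Segmentation by Distilling Feature Correspondences "
    "paper on the COCO-Stuff-27 dataset?"
)
print(parse_prompt(example)["dataset"])  # COCO-Stuff-27
```

Non-greedy groups keep model names containing parentheses (e.g. "STEGO (ViT-B/8)") intact; a few records omit the paper title, and those will not match the pattern as written.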
What metrics were used to measure the ATL model in the Affordance Transfer Learning for Human-Object Interaction Detection paper on the HICO-DET dataset?
COCO-Val2017, Object365, HICO, Novel classes
What metrics were used to measure the VCL model in the Visual Compositional Learning for Human-Object Interaction Detection paper on the HICO-DET dataset?
COCO-Val2017, Object365, HICO, Novel classes
What metrics were used to measure the FCL model in the Detecting Human-Object Interaction via Fabricated Compositional Learning paper on the HICO-DET dataset?
COCO-Val2017, Object365, HICO, Novel classes
What metrics were used to measure the SCL model in the Discovering Human-Object Interaction Concepts via Self-Compositional Learning paper on the HICO-DET dataset?
Unknown (AP)
What metrics were used to measure the QPIC model in the QPIC: Query-Based Pairwise Human-Object Interaction Detection with Image-Wide Contextual Information paper on the HICO-DET dataset?
Unknown (AP)
What metrics were used to measure the Affordance Transfer model in the Affordance Transfer Learning for Human-Object Interaction Detection paper on the HICO-DET dataset?
Unknown (AP)
What metrics were used to measure the Deep RNN model in the Long-term Blood Pressure Prediction with Deep Recurrent Neural Networks paper on the MIMIC-III dataset?
MAE for SBP [mmHg], MAE for DBP [mmHg]
What metrics were used to measure the ResNet (raw PPG + PPG’ + PPG”, with personalization) model in the Blood Pressure Estimation from Photoplethysmogram Using a Spectro-Temporal Deep Neural Network paper on the MIMIC-III dataset?
MAE for SBP [mmHg], MAE for DBP [mmHg]
What metrics were used to measure the Random Forest (features, with personalization) model in the paper on the MIMIC-III dataset?
MAE for SBP [mmHg], MAE for DBP [mmHg]
What metrics were used to measure the Deep RNN model in the Long-term Blood Pressure Prediction with Deep Recurrent Neural Networks paper on the Multi-day Continuous BP Prediction dataset?
RMSE
What metrics were used to measure the Segmenter ViT-S/16 model in the Drive&Segment: Unsupervised Semantic Segmentation of Urban Scenes via Cross-modal Distillation paper on the Nighttime Driving dataset?
mIoU
What metrics were used to measure the CAUSE (DINOv2, ViT-B/14) model in the Causal Unsupervised Semantic Segmentation paper on the COCO-Stuff-27 dataset?
mIoU, Accuracy
What metrics were used to measure the CAUSE (ViT-B/8) model in the Causal Unsupervised Semantic Segmentation paper on the COCO-Stuff-27 dataset?
mIoU, Accuracy
What metrics were used to measure the STEGO (ViT-B/8) model in the Unsupervised Semantic Segmentation by Distilling Feature Correspondences paper on the COCO-Stuff-27 dataset?
mIoU, Accuracy
What metrics were used to measure the SmooSeg (DINO, ViT-S/8) model in the SmooSeg: Smoothness Prior for Unsupervised Semantic Segmentation paper on the COCO-Stuff-27 dataset?
mIoU, Accuracy
What metrics were used to measure the HP (ViT-S/8) model in the Leveraging Hidden Positives for Unsupervised Semantic Segmentation paper on the COCO-Stuff-27 dataset?
mIoU, Accuracy
What metrics were used to measure the STEGO (ViT-S/8) model in the Unsupervised Semantic Segmentation by Distilling Feature Correspondences paper on the COCO-Stuff-27 dataset?
mIoU, Accuracy
What metrics were used to measure the CrOC (ViT-S/16, COCO+) model in the CrOC: Cross-View Online Clustering for Dense Visual Representation Learning paper on the COCO-Stuff-27 dataset?
mIoU, Accuracy
What metrics were used to measure the ViCE model in the ViCE: Improving Dense Representation Learning by Superpixelization and Contrasting Cluster Assignment paper on the COCO-Stuff-27 dataset?
mIoU, Accuracy
What metrics were used to measure the FS4 model in the Fully Self-Supervised Learning for Semantic Segmentation paper on the COCO-Stuff-27 dataset?
mIoU, Accuracy
What metrics were used to measure the PiCIE + H model in the PiCIE: Unsupervised Semantic Segmentation using Invariance and Equivariance in Clustering paper on the COCO-Stuff-27 dataset?
mIoU, Accuracy
What metrics were used to measure the SGSeg model in the Unsupervised Image Semantic Segmentation through Superpixels and Graph Neural Networks paper on the COCO-Stuff-27 dataset?
mIoU, Accuracy
What metrics were used to measure the SAN model in the Rethinking Alignment and Uniformity in Unsupervised Image Semantic Segmentation paper on the COCO-Stuff-27 dataset?
mIoU, Accuracy
What metrics were used to measure the PiCIE model in the PiCIE: Unsupervised Semantic Segmentation using Invariance and Equivariance in Clustering paper on the COCO-Stuff-27 dataset?
mIoU, Accuracy
What metrics were used to measure the Ours (SlotCon) model in the Self-Supervised Visual Representation Learning with Semantic Grouping paper on the COCO-Stuff-27 dataset?
mIoU, Accuracy
What metrics were used to measure the IIC model in the Invariant Information Clustering for Unsupervised Image Classification and Segmentation paper on the COCO-Stuff-27 dataset?
mIoU, Accuracy
What metrics were used to measure the PASS model in the Large-scale Unsupervised Semantic Segmentation paper on the ImageNet-S-300 dataset?
mIoU (test), mIoU (val)
What metrics were used to measure the CAUSE-TR (ViT-S/8) model in the Causal Unsupervised Semantic Segmentation paper on the COCO-Stuff-171 dataset?
mIoU, Pixel Accuracy
What metrics were used to measure the TransFGU (ViT-S/8) model in the TransFGU: A Top-down Approach to Fine-Grained Unsupervised Semantic Segmentation paper on the COCO-Stuff-171 dataset?
mIoU, Pixel Accuracy
What metrics were used to measure the PiCIE (ResNet-50) model in the PiCIE: Unsupervised Semantic Segmentation using Invariance and Equivariance in Clustering paper on the COCO-Stuff-171 dataset?
mIoU, Pixel Accuracy
What metrics were used to measure the IIC (ResNet-50) model in the Invariant Information Clustering for Unsupervised Image Classification and Segmentation paper on the COCO-Stuff-171 dataset?
mIoU, Pixel Accuracy
What metrics were used to measure the SAN model in the Rethinking Alignment and Uniformity in Unsupervised Image Semantic Segmentation paper on the COCO-Stuff-3 dataset?
Pixel Accuracy
What metrics were used to measure the SGSeg model in the Unsupervised Image Semantic Segmentation through Superpixels and Graph Neural Networks paper on the COCO-Stuff-3 dataset?
Pixel Accuracy
What metrics were used to measure the InfoSeg model in the InfoSeg: Unsupervised Semantic Image Segmentation with Mutual Information Maximization paper on the COCO-Stuff-3 dataset?
Pixel Accuracy
What metrics were used to measure the InMARS model in the Unsupervised Image Segmentation by Mutual Information Maximization and Adversarial Regularization paper on the COCO-Stuff-3 dataset?
Pixel Accuracy
What metrics were used to measure the AC model in the Autoregressive Unsupervised Image Segmentation paper on the COCO-Stuff-3 dataset?
Pixel Accuracy
What metrics were used to measure the IIC model in the Invariant Information Clustering for Unsupervised Image Classification and Segmentation paper on the COCO-Stuff-3 dataset?
Pixel Accuracy
What metrics were used to measure the CAUSE (DINOv2, ViT-B/14) model in the Causal Unsupervised Semantic Segmentation paper on the Cityscapes test dataset?
mIoU, Accuracy
What metrics were used to measure the CAUSE (ViT-B/8) model in the Causal Unsupervised Semantic Segmentation paper on the Cityscapes test dataset?
mIoU, Accuracy
What metrics were used to measure the ViCE model in the ViCE: Improving Dense Representation Learning by Superpixelization and Contrasting Cluster Assignment paper on the Cityscapes test dataset?
mIoU, Accuracy
What metrics were used to measure the STEGO model in the Unsupervised Semantic Segmentation by Distilling Feature Correspondences paper on the Cityscapes test dataset?
mIoU, Accuracy
What metrics were used to measure the HP model in the Leveraging Hidden Positives for Unsupervised Semantic Segmentation paper on the Cityscapes test dataset?
mIoU, Accuracy
What metrics were used to measure the PiCIE model in the PiCIE: Unsupervised Semantic Segmentation using Invariance and Equivariance in Clustering paper on the Cityscapes test dataset?
mIoU, Accuracy
What metrics were used to measure the MDC model in the Deep Clustering for Unsupervised Learning of Visual Features paper on the Cityscapes test dataset?
mIoU, Accuracy
What metrics were used to measure the InfoSeg model in the InfoSeg: Unsupervised Semantic Image Segmentation with Mutual Information Maximization paper on the COCO-Persons dataset?
Pixel Accuracy
What metrics were used to measure the PASS (+Saliency map) model in the Large-scale Unsupervised Semantic Segmentation paper on the ImageNet-S-50 dataset?
mIoU (test), mIoU (val)
What metrics were used to measure the PASS model in the Large-scale Unsupervised Semantic Segmentation paper on the ImageNet-S-50 dataset?
mIoU (test), mIoU (val)
What metrics were used to measure the MaskContrast (+Saliency map) model in the Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals paper on the ImageNet-S-50 dataset?
mIoU (test), mIoU (val)
What metrics were used to measure the PiCIE (Supervised pretrain) model in the PiCIE: Unsupervised Semantic Segmentation using Invariance and Equivariance in Clustering paper on the ImageNet-S-50 dataset?
mIoU (test), mIoU (val)
What metrics were used to measure the MDC (Supervised pretrain) model in the Deep Clustering for Unsupervised Learning of Visual Features paper on the ImageNet-S-50 dataset?
mIoU (test), mIoU (val)
What metrics were used to measure the HP model in the Leveraging Hidden Positives for Unsupervised Semantic Segmentation paper on the Potsdam-3 dataset?
Accuracy, Pixel Accuracy
What metrics were used to measure the STEGO model in the Unsupervised Semantic Segmentation by Distilling Feature Correspondences paper on the Potsdam-3 dataset?
Accuracy, Pixel Accuracy
What metrics were used to measure the IIC model in the Invariant Information Clustering for Unsupervised Image Classification and Segmentation paper on the Potsdam-3 dataset?
Accuracy, Pixel Accuracy
What metrics were used to measure the InfoSeg model in the InfoSeg: Unsupervised Semantic Image Segmentation with Mutual Information Maximization paper on the Potsdam-3 dataset?
Accuracy, Pixel Accuracy
What metrics were used to measure the Segmenter ViT-S/16 model in the Drive&Segment: Unsupervised Semantic Segmentation of Urban Scenes via Cross-modal Distillation paper on the ACDC (Adverse Conditions Dataset with Correspondences) dataset?
mIoU
What metrics were used to measure the InfoSeg model in the InfoSeg: Unsupervised Semantic Image Segmentation with Mutual Information Maximization paper on the COCO-Stuff-15 dataset?
Pixel Accuracy
What metrics were used to measure the InMARS model in the Unsupervised Image Segmentation by Mutual Information Maximization and Adversarial Regularization paper on the COCO-Stuff-15 dataset?
Pixel Accuracy
What metrics were used to measure the IIC model in the Invariant Information Clustering for Unsupervised Image Classification and Segmentation paper on the COCO-Stuff-15 dataset?
Pixel Accuracy
What metrics were used to measure the DenseSiam model in the Dense Siamese Network for Dense Unsupervised Learning paper on the COCO-All dataset?
mIoU
What metrics were used to measure the CAUSE (iBOT, ViT-B/16) model in the Causal Unsupervised Semantic Segmentation paper on the PASCAL VOC 2012 val dataset?
Clustering [mIoU], Linear Classifier [mIoU], FCN [mIoU]
What metrics were used to measure the CAUSE (ViT-B/8) model in the Causal Unsupervised Semantic Segmentation paper on the PASCAL VOC 2012 val dataset?
Clustering [mIoU], Linear Classifier [mIoU], FCN [mIoU]
What metrics were used to measure the CAUSE (DINOv2, ViT-B/14) model in the Causal Unsupervised Semantic Segmentation paper on the PASCAL VOC 2012 val dataset?
Clustering [mIoU], Linear Classifier [mIoU], FCN [mIoU]
What metrics were used to measure the MaskDistill+CRF model in the Discovering Object Masks with Transformers for Unsupervised Semantic Segmentation paper on the PASCAL VOC 2012 val dataset?
Clustering [mIoU], Linear Classifier [mIoU], FCN [mIoU]
What metrics were used to measure the Leopart (ViT-B/8) model in the Self-Supervised Learning of Object Parts for Semantic Segmentation paper on the PASCAL VOC 2012 val dataset?
Clustering [mIoU], Linear Classifier [mIoU], FCN [mIoU]
What metrics were used to measure the MaskDistill model in the Discovering Object Masks with Transformers for Unsupervised Semantic Segmentation paper on the PASCAL VOC 2012 val dataset?
Clustering [mIoU], Linear Classifier [mIoU], FCN [mIoU]
What metrics were used to measure the MaskContrast (Saliency) model in the Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals paper on the PASCAL VOC 2012 val dataset?
Clustering [mIoU], Linear Classifier [mIoU], FCN [mIoU]
What metrics were used to measure the Leopart (ViT-S/16) model in the Self-Supervised Learning of Object Parts for Semantic Segmentation paper on the PASCAL VOC 2012 val dataset?
Clustering [mIoU], Linear Classifier [mIoU], FCN [mIoU]
What metrics were used to measure the MaskContrast model in the Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals paper on the PASCAL VOC 2012 val dataset?
Clustering [mIoU], Linear Classifier [mIoU], FCN [mIoU]
What metrics were used to measure the SegSort (Edges) model in the SegSort: Segmentation by Discriminative Sorting of Segments paper on the PASCAL VOC 2012 val dataset?
Clustering [mIoU], Linear Classifier [mIoU], FCN [mIoU]
What metrics were used to measure the Segmenter ViT-S/16 model in the Drive&Segment: Unsupervised Semantic Segmentation of Urban Scenes via Cross-modal Distillation paper on the Cityscapes val dataset?
mIoU
What metrics were used to measure the CAUSE-TR (ViT-S/8) model in the Causal Unsupervised Semantic Segmentation paper on the COCO-Stuff-81 dataset?
mIoU, Pixel Accuracy
What metrics were used to measure the CAUSE-MLP (ViT-S/8) model in the Causal Unsupervised Semantic Segmentation paper on the COCO-Stuff-81 dataset?
mIoU, Pixel Accuracy
What metrics were used to measure the TransFGU (ViT-S/8) model in the TransFGU: A Top-down Approach to Fine-Grained Unsupervised Semantic Segmentation paper on the COCO-Stuff-81 dataset?
mIoU, Pixel Accuracy
What metrics were used to measure the MaskContrast (ResNet-50) model in the Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals paper on the COCO-Stuff-81 dataset?
mIoU, Pixel Accuracy
What metrics were used to measure the Segmenter ViT-S/16 model in the Drive&Segment: Unsupervised Semantic Segmentation of Urban Scenes via Cross-modal Distillation paper on the Dark Zurich dataset?
mIoU
What metrics were used to measure the PASS model in the Large-scale Unsupervised Semantic Segmentation paper on the ImageNet-S dataset?
mIoU (test), mIoU (val)
What metrics were used to measure the CodeBERT(MLM) model in the CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation paper on the CodeXGLUE - CT-all dataset?
Go, JS, Java, PHP, Python, Ruby
What metrics were used to measure the CodeBERT(MLM) model in the CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation paper on the CodeXGLUE - CT-maxmin dataset?
Go, JS, Java, PHP, Python, Ruby
What metrics were used to measure the DDN model in the A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning paper on the SMAC MMM2_7m2M1M_vs_8m4M1M dataset?
Average Score, Median Win Rate
What metrics were used to measure the DMIX model in the A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning paper on the SMAC MMM2_7m2M1M_vs_8m4M1M dataset?
Average Score, Median Win Rate
What metrics were used to measure the DPLEX model in the A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning paper on the SMAC MMM2_7m2M1M_vs_8m4M1M dataset?
Average Score, Median Win Rate
What metrics were used to measure the QPLEX model in the A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning paper on the SMAC MMM2_7m2M1M_vs_8m4M1M dataset?
Average Score, Median Win Rate
What metrics were used to measure the QMIX model in the A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning paper on the SMAC MMM2_7m2M1M_vs_8m4M1M dataset?
Average Score, Median Win Rate
What metrics were used to measure the VDN model in the A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning paper on the SMAC MMM2_7m2M1M_vs_8m4M1M dataset?
Average Score, Median Win Rate
What metrics were used to measure the ACE model in the ACE: Cooperative Multi-agent Q-learning with Bidirectional Action-Dependency paper on the SMAC 6h_vs_8z dataset?
Median Win Rate, Average Score
What metrics were used to measure the DDN model in the DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning paper on the SMAC 6h_vs_8z dataset?
Median Win Rate, Average Score
What metrics were used to measure the DMIX model in the DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning paper on the SMAC 6h_vs_8z dataset?
Median Win Rate, Average Score
What metrics were used to measure the DPLEX model in the A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning paper on the SMAC 6h_vs_8z dataset?
Median Win Rate, Average Score
What metrics were used to measure the QMIX model in the DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning paper on the SMAC 6h_vs_8z dataset?
Median Win Rate, Average Score
What metrics were used to measure the QMIX model in the The StarCraft Multi-Agent Challenge paper on the SMAC 6h_vs_8z dataset?
Median Win Rate, Average Score
What metrics were used to measure the QMIX model in the Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning paper on the SMAC 6h_vs_8z dataset?
Median Win Rate, Average Score
What metrics were used to measure the VDN model in the DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning paper on the SMAC 6h_vs_8z dataset?
Median Win Rate, Average Score
What metrics were used to measure the DIQL model in the DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning paper on the SMAC 6h_vs_8z dataset?
Median Win Rate, Average Score
What metrics were used to measure the IQL model in the DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning paper on the SMAC 6h_vs_8z dataset?
Median Win Rate, Average Score
What metrics were used to measure the IQL model in the The StarCraft Multi-Agent Challenge paper on the SMAC 6h_vs_8z dataset?
Median Win Rate, Average Score
What metrics were used to measure the VDN model in the The StarCraft Multi-Agent Challenge paper on the SMAC 6h_vs_8z dataset?
Median Win Rate, Average Score
What metrics were used to measure the Heuristic model in the The StarCraft Multi-Agent Challenge paper on the SMAC 6h_vs_8z dataset?
Median Win Rate, Average Score
What metrics were used to measure the QPLEX model in the A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning paper on the SMAC 6h_vs_8z dataset?
Median Win Rate, Average Score
What metrics were used to measure the ACE model in the ACE: Cooperative Multi-agent Q-learning with Bidirectional Action-Dependency paper on the SMAC MMM2 dataset?
Median Win Rate, Average Score
What metrics were used to measure the DDN model in the DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning paper on the SMAC MMM2 dataset?
Median Win Rate, Average Score