Columns: prompts (string, lengths 81–413), metrics_response (string, lengths 0–371)
What metrics were used to measure the RPC model in the Benchmarking and Analyzing Point Cloud Classification under Corruptions paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the GDANet model in the Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the PCT model in the PCT: Point cloud transformer paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the CurveNet model in the Walk in the Cloud: Learning Curves for Point Clouds Shape Analysis paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the DGCNN model in the Dynamic Graph CNN for Learning on Point Clouds paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the PointMixUp (PointNet++) model in the PointMixup: Augmentation for Point Clouds paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the SimpleView model in the Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the OcCo-DGCNN model in the Unsupervised Point Cloud Pre-Training via Occlusion Completion paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the PAConv model in the PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the RSCNN model in the Relation-Shape Convolutional Neural Network for Point Cloud Analysis paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the WOLFMix (PointNet) model in the Benchmarking and Analyzing Point Cloud Classification under Corruptions paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the PointNet model in the PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
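All PointCloud-C rows above share a single metric, mean Corruption Error (mCE). A minimal sketch of how such a score is computed (corruption names, severity counts, and the reference model are illustrative; the benchmark normalizes each model's error by a fixed baseline's error):

```python
def corruption_error(model_err, base_err):
    """CE for one corruption type: the model's error rates summed over
    severity levels, normalized by the same sum for a reference model."""
    return sum(model_err) / sum(base_err)

def mean_corruption_error(model_errs, base_errs):
    """mCE: average the per-corruption CE over all corruption types.
    Each argument maps corruption name -> list of error rates,
    one per severity level."""
    ces = [corruption_error(model_errs[c], base_errs[c]) for c in model_errs]
    return sum(ces) / len(ces)

# Toy example: two corruptions, three severity levels each.
base = {"jitter": [0.25, 0.5, 0.75], "dropout": [0.5, 0.5, 0.5]}
model = {"jitter": [0.125, 0.25, 0.375], "dropout": [0.25, 0.25, 0.25]}
print(mean_corruption_error(model, base))  # → 0.5
```

Lower is better: an mCE of 0.5 here means the model halves the baseline's error under every corruption.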
What metrics were used to measure the SAMFusion model in the Improving existing segmentators performance with zero-shot segmentators paper on the CAMO dataset?
E-Measure, MAE, S-Measure, Weighted F-Measure
What metrics were used to measure the EVPv2 model in the Explicit Visual Prompting for Universal Foreground Segmentations paper on the CAMO dataset?
E-Measure, MAE, S-Measure, Weighted F-Measure
What metrics were used to measure the EVPv1 model in the Explicit Visual Prompting for Low-Level Structure Segmentations paper on the CAMO dataset?
E-Measure, MAE, S-Measure, Weighted F-Measure
What metrics were used to measure the SINet-V2 model in the Concealed Object Detection paper on the CAMO dataset?
E-Measure, MAE, S-Measure, Weighted F-Measure
What metrics were used to measure the MirrorNet-ResNeXt152 model in the MirrorNet: Bio-Inspired Camouflaged Object Segmentation paper on the CAMO dataset?
E-Measure, MAE, S-Measure, Weighted F-Measure
What metrics were used to measure the PraNet model in the PraNet: Parallel Reverse Attention Network for Polyp Segmentation paper on the CAMO dataset?
E-Measure, MAE, S-Measure, Weighted F-Measure
What metrics were used to measure the SINet* model in the Camouflaged Object Detection paper on the CAMO dataset?
E-Measure, MAE, S-Measure, Weighted F-Measure
What metrics were used to measure the EGNet model in the EGNet: Edge Guidance Network for Salient Object Detection paper on the CAMO dataset?
E-Measure, MAE, S-Measure, Weighted F-Measure
What metrics were used to measure the BASNet model in the BASNet: Boundary-Aware Salient Object Detection paper on the CAMO dataset?
E-Measure, MAE, S-Measure, Weighted F-Measure
What metrics were used to measure the EVPv2 model in the Explicit Visual Prompting for Universal Foreground Segmentations paper on the COD dataset?
E-Measure, MAE, S-Measure, Weighted F-Measure
What metrics were used to measure the EVPv1 model in the Explicit Visual Prompting for Low-Level Structure Segmentations paper on the COD dataset?
E-Measure, MAE, S-Measure, Weighted F-Measure
What metrics were used to measure the SINet* model in the Concealed Object Detection paper on the COD dataset?
E-Measure, MAE, S-Measure, Weighted F-Measure
What metrics were used to measure the EGNet model in the EGNet: Edge Guidance Network for Salient Object Detection paper on the COD dataset?
E-Measure, MAE, S-Measure, Weighted F-Measure
What metrics were used to measure the CPD model in the Cascaded Partial Decoder for Fast and Accurate Salient Object Detection paper on the COD dataset?
E-Measure, MAE, S-Measure, Weighted F-Measure
What metrics were used to measure the SINet model in the Camouflaged Object Detection paper on the COD dataset?
E-Measure, MAE, S-Measure, Weighted F-Measure
What metrics were used to measure the BASNet model in the BASNet: Boundary-Aware Salient Object Detection paper on the COD dataset?
E-Measure, MAE, S-Measure, Weighted F-Measure
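The CAMO and COD rows share the standard camouflaged-object-detection suite: E-Measure, MAE, S-Measure, and Weighted F-Measure. MAE is the simplest of the four; a minimal sketch (the other three involve structural and alignment terms beyond a short example):

```python
def mae(pred, gt):
    """Mean Absolute Error between a predicted map and the ground-truth
    mask, both given as flat lists of values in [0, 1]."""
    assert len(pred) == len(gt)
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(pred)

print(mae([0.0, 0.5, 1.0, 1.0], [0.0, 0.0, 1.0, 1.0]))  # → 0.125
```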
What metrics were used to measure the Panoptic-PartFormer model in the Panoptic-PartFormer: Learning a Unified Model for Panoptic Part Segmentation paper on the Pascal Panoptic Parts dataset?
PartPQ
What metrics were used to measure the JPPF model in the Multi-task Fusion for Efficient Panoptic-Part Segmentation paper on the Pascal Panoptic Parts dataset?
PartPQ
What metrics were used to measure the Panoptic-PartFormer model in the Panoptic-PartFormer: Learning a Unified Model for Panoptic Part Segmentation paper on the Cityscapes Panoptic Parts dataset?
PartPQ
What metrics were used to measure the JPPF model in the Multi-task Fusion for Efficient Panoptic-Part Segmentation paper on the Cityscapes Panoptic Parts dataset?
PartPQ
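PartPQ extends Panoptic Quality (PQ) to part-level labels. A sketch of plain PQ (PartPQ replaces the IoU term with a part-aware IoU for classes that have parts; that refinement is omitted here, and the numbers are illustrative):

```python
def panoptic_quality(tp_ious, num_fp, num_fn):
    """PQ = (sum of IoUs over true-positive segment matches) /
    (|TP| + 0.5 * |FP| + 0.5 * |FN|).
    A predicted segment counts as a TP when it matches a ground-truth
    segment with IoU > 0.5."""
    tp = len(tp_ious)
    denom = tp + 0.5 * num_fp + 0.5 * num_fn
    return sum(tp_ious) / denom if denom else 0.0

print(panoptic_quality([0.8, 0.6], num_fp=1, num_fn=1))  # ≈ 0.467
```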
What metrics were used to measure the Pr-VIPE model in the View-Invariant Probabilistic Embedding for Human Pose paper on the MPI-INF-3DHP dataset?
Hit@1, Hit@10
What metrics were used to measure the Pr-VIPE model in the View-Invariant Probabilistic Embedding for Human Pose paper on the Human3.6M dataset?
Hit@1, Hit@10
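The Pr-VIPE rows use retrieval metrics: Hit@k is the fraction of queries whose correct match appears among the top-k retrieved items. A sketch (the ranked lists are illustrative):

```python
def hit_at_k(ranked_results, targets, k):
    """Fraction of queries whose target appears among the top-k
    retrieved items. ranked_results[i] is the ranked list for query i."""
    hits = sum(1 for ranked, t in zip(ranked_results, targets) if t in ranked[:k])
    return hits / len(targets)

ranked = [["b", "a", "c"], ["a", "c", "b"]]
targets = ["a", "a"]
print(hit_at_k(ranked, targets, 1))  # → 0.5
print(hit_at_k(ranked, targets, 2))  # → 1.0
```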
What metrics were used to measure the UNINEXT-H model in the Universal Instance Perception as Object Discovery and Retrieval paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the MUTR model in the Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the UniRef-L (Swin-L) model in the Segment Every Reference Object in Spatial and Temporal Spaces paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the SOC (Joint training, Video-Swin-B) model in the SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the DEVA (ReferFormer) model in the Tracking Anything with Decoupled Video Segmentation paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the SgMg (Pre-training, Video-Swin-B) model in the Spectrum-guided Multi-granularity Referring Video Object Segmentation paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the EPCFormer (ViT-H) model in the EPCFormer: Expression Prompt Collaboration Transformer for Universal Referring Video Object Segmentation paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the LoSh-R model in the LoSh: Long-Short Text Joint Prediction Network for Referring Video Object Segmentation paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the VLT model in the VLT: Vision-Language Transformer and Query Generation for Referring Segmentation paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the OnlineRefer (Swin-L, online) model in the OnlineRefer: A Simple Online Baseline for Referring Video Object Segmentation paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the R2VOS (Video-Swin-T) model in the Towards Robust Referring Video Object Segmentation with Cyclic Relational Consensus paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the SOC (Video-Swin-T) model in the SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the ReferFormer (ResNet-101) model in the Language as Queries for Referring Video Object Segmentation paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the MANET model in the Multi-Attention Network for Compressed Video Referring Object Segmentation paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the ReferFormer (ResNet-50) model in the Language as Queries for Referring Video Object Segmentation paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the MTTR (w=12) model in the End-to-End Referring Video Object Segmentation with Multimodal Transformers paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the Locater model in the Local-Global Context Aware Transformer for Language-Guided Video Segmentation paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the MLRLSA model in the Multi-Level Representation Learning With Semantic Alignment for Referring Video Object Segmentation paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the VLIDE model in the Deeply Interleaved Two-Stream Encoder for Referring Video Segmentation paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
What metrics were used to measure the URVOS model in the URVOS: Unified Referring Video Object Segmentation Network with a Large-Scale Benchmark paper on the Refer-YouTube-VOS (2021 public validation) dataset?
J&F, J, F
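The Refer-YouTube-VOS rows report the DAVIS-style trio: J (region similarity, i.e. mask IoU), F (contour accuracy, a boundary F-measure), and J&F (their mean). The boundary matching behind F needs contour extraction; a sketch of J and the J&F average, with a stand-in F value:

```python
def region_j(pred, gt):
    """J: intersection-over-union between binary masks, given here as
    sets of foreground pixel coordinates."""
    union = len(pred | gt)
    return len(pred & gt) / union if union else 1.0

pred = {(0, 0), (0, 1), (1, 0)}
gt = {(0, 0), (0, 1), (1, 1)}
j = region_j(pred, gt)   # 2 / 4 = 0.5
f = 0.7                  # stand-in boundary F-measure
print((j + f) / 2)       # → 0.6, the reported J&F
```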
What metrics were used to measure the SgMg (Video-Swin-B) model in the Spectrum-guided Multi-granularity Referring Video Object Segmentation paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the SOC (Video-Swin-B) model in the SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the VLIDE model in the Deeply Interleaved Two-Stream Encoder for Referring Video Segmentation paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the SOC (Video-Swin-T) model in the SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the MTTR (w=10) model in the End-to-End Referring Video Object Segmentation with Multimodal Transformers paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the MTTR (w=8) model in the End-to-End Referring Video Object Segmentation with Multimodal Transformers paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the CMPC-V model in the Cross-Modal Progressive Comprehension for Referring Segmentation paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the Hui et al. model in the Collaborative Spatial-Temporal Modeling for Language-Queried Video Actor Segmentation paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the AAMN model in the Actor and Action Modular Network for Text-based Video Segmentation paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the CMDy model in the Context Modulated Dynamic Networks for Actor and Action Video Segmentation with Language Queries paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the PRPE model in the Polar Relative Positional Encoding for Video-Language Segmentation paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the ACGA model in the Asymmetric Cross-Guided Attention Network for Actor and Action Video Segmentation From Natural Language Query paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the Gavrilyuk et al. (Optical flow) model in the Actor and Action Video Segmentation from a Sentence paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the VT-Capsule model in the Visual-Textual Capsule Routing for Text-Based Video Segmentation paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the Gavrilyuk et al. model in the Actor and Action Video Segmentation from a Sentence paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the Hu et al. model in the Segmentation from Natural Language Expressions paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the Li et al. model in the Tracking by Natural Language Specification paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the HINet model in the Hierarchical interaction network for video object segmentation from referring expressions paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the ClawCraneNet model in the ClawCraneNet: Leveraging Object-level Relation for Text-based Video Segmentation paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the CMSA+CFSA model in the Referring Segmentation in Images and Videos with Cross-Modal Self-Attention Network paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the RefVOS model in the Hierarchical interaction network for video object segmentation from referring expressions paper on the J-HMDB dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
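The J-HMDB rows share one segmentation suite: IoU overall (intersections and unions pooled over the dataset), IoU mean (per-sample average), Precision@X (fraction of samples with IoU above threshold X), and AP (precision averaged over thresholds 0.50:0.05:0.95). A sketch of the threshold-based precision (the IoU values are illustrative):

```python
def precision_at(ious, threshold):
    """Fraction of test samples whose prediction IoU exceeds the threshold."""
    return sum(1 for iou in ious if iou > threshold) / len(ious)

ious = [0.95, 0.85, 0.65, 0.40]
print(precision_at(ious, 0.5))  # → 0.75
print(precision_at(ious, 0.9))  # → 0.25
```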
What metrics were used to measure the RefVOS-Human REs model in the SynthRef: Generation of Synthetic Referring Expressions for Object Segmentation paper on the Refer-YouTube-VOS dataset?
Mean IoU, Precision@0.5, Precision@0.9
What metrics were used to measure the RefVOS-Synthetic REs model in the SynthRef: Generation of Synthetic Referring Expressions for Object Segmentation paper on the Refer-YouTube-VOS dataset?
Mean IoU, Precision@0.5, Precision@0.9
What metrics were used to measure the UNINEXT-H model in the Universal Instance Perception as Object Discovery and Retrieval paper on the RefCOCO+ test B dataset?
Overall IoU, Mean IoU
What metrics were used to measure the PolyFormer-L model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the RefCOCO+ test B dataset?
Overall IoU, Mean IoU
What metrics were used to measure the PolyFormer-B model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the RefCOCO+ test B dataset?
Overall IoU, Mean IoU
What metrics were used to measure the ReLA model in the GRES: Generalized Referring Expression Segmentation paper on the RefCOCO+ test B dataset?
Overall IoU, Mean IoU
What metrics were used to measure the VLT model in the VLT: Vision-Language Transformer and Query Generation for Referring Segmentation paper on the RefCOCO+ test B dataset?
Overall IoU, Mean IoU
What metrics were used to measure the MaIL model in the MaIL: A Unified Mask-Image-Language Trimodal Network for Referring Image Segmentation paper on the RefCOCO+ test B dataset?
Overall IoU, Mean IoU
What metrics were used to measure the LAVT model in the LAVT: Language-Aware Vision Transformer for Referring Image Segmentation paper on the RefCOCO+ test B dataset?
Overall IoU, Mean IoU
What metrics were used to measure the CRIS model in the CRIS: CLIP-Driven Referring Image Segmentation paper on the RefCOCO+ test B dataset?
Overall IoU, Mean IoU
What metrics were used to measure the VLT model in the Vision-Language Transformer and Query Generation for Referring Segmentation paper on the RefCOCO+ test B dataset?
Overall IoU, Mean IoU
What metrics were used to measure the SHNet model in the Comprehensive Multi-Modal Interactions for Referring Image Segmentation paper on the RefCOCO+ test B dataset?
Overall IoU, Mean IoU
What metrics were used to measure the CMPC model in the Referring Image Segmentation via Cross-Modal Progressive Comprehension paper on the RefCOCO+ test B dataset?
Overall IoU, Mean IoU
What metrics were used to measure the BRINet model in the Bi-Directional Relationship Inferring Network for Referring Image Segmentation paper on the RefCOCO+ test B dataset?
Overall IoU, Mean IoU
What metrics were used to measure the STEP (5-fold) model in the See-Through-Text Grouping for Referring Image Segmentation paper on the RefCOCO+ test B dataset?
Overall IoU, Mean IoU
What metrics were used to measure the MAttNet model in the MAttNet: Modular Attention Network for Referring Expression Comprehension paper on the RefCOCO+ test B dataset?
Overall IoU, Mean IoU
What metrics were used to measure the CMSA model in the Cross-Modal Self-Attention Network for Referring Image Segmentation paper on the RefCOCO+ test B dataset?
Overall IoU, Mean IoU
What metrics were used to measure the RefVOS with BERT + MLM loss model in the RefVOS: A Closer Look at Referring Expressions for Video Object Segmentation paper on the RefCOCO+ test B dataset?
Overall IoU, Mean IoU
What metrics were used to measure the PolyFormer-L model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the ReferIt dataset?
Overall IoU, Mean IoU
What metrics were used to measure the PolyFormer-B model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the ReferIt dataset?
Overall IoU, Mean IoU
What metrics were used to measure the SHNet model in the Comprehensive Multi-Modal Interactions for Referring Image Segmentation paper on the ReferIt dataset?
Overall IoU, Mean IoU
What metrics were used to measure the MaIL model in the MaIL: A Unified Mask-Image-Language Trimodal Network for Referring Image Segmentation paper on the G-Ref test B dataset?
Overall IoU
What metrics were used to measure the UNINEXT-H model in the Universal Instance Perception as Object Discovery and Retrieval paper on the RefCOCO testB dataset?
Overall IoU, Mean IoU
What metrics were used to measure the PolyFormer-L model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the RefCOCO testB dataset?
Overall IoU, Mean IoU
What metrics were used to measure the PolyFormer-B model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the RefCOCO testB dataset?
Overall IoU, Mean IoU
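The RefCOCO/ReferIt rows distinguish Overall IoU from Mean IoU: overall pools intersection and union counts across the whole test set (so large objects dominate), while mean averages per-expression IoU. A sketch with illustrative per-sample (intersection, union) pixel counts:

```python
def overall_iou(samples):
    """samples: list of (intersection, union) pixel counts, one per
    expression. Pool the counts across the dataset before dividing."""
    inter = sum(i for i, _ in samples)
    union = sum(u for _, u in samples)
    return inter / union

def mean_iou(samples):
    """Average the per-expression IoU instead of pooling counts."""
    return sum(i / u for i, u in samples) / len(samples)

samples = [(90, 100), (1, 10)]  # one large object, one small
print(overall_iou(samples))     # 91/110, dominated by the large object
print(mean_iou(samples))        # → 0.5, each expression weighted equally
```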