Columns: prompts (string, length 81–413), metrics_response (string, length 0–371)
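The rows below pair a question (`prompts`) with the metric list that answers it (`metrics_response`). As a minimal sketch, assuming each row is stored as a plain dict with those two string columns, the declared length bounds can be validated like this (the `validate` helper and the sample row are hypothetical, built from the first entry below):

```python
# Hypothetical example row mirroring the first entry of this dump.
rows = [
    {
        "prompts": (
            "What metrics were used to measure the ReLA model in the "
            "GRES: Generalized Referring Expression Segmentation paper "
            "on the RefCOCO testB dataset?"
        ),
        "metrics_response": "Overall IoU, Mean IoU",
    },
]

def validate(row):
    """Check one row against the declared string-length bounds."""
    assert 81 <= len(row["prompts"]) <= 413, "prompts length out of bounds"
    assert 0 <= len(row["metrics_response"]) <= 371, "response length out of bounds"
    return True

all_ok = all(validate(r) for r in rows)
```

The same loop works unchanged over the full table once the rows are loaded into that dict shape.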
What metrics were used to measure the ReLA model in the GRES: Generalized Referring Expression Segmentation paper on the RefCOCO testB dataset?
Overall IoU, Mean IoU
What metrics were used to measure the VLT model in the VLT: Vision-Language Transformer and Query Generation for Referring Segmentation paper on the RefCOCO testB dataset?
Overall IoU, Mean IoU
What metrics were used to measure the MaIL model in the MaIL: A Unified Mask-Image-Language Trimodal Network for Referring Image Segmentation paper on the RefCOCO testB dataset?
Overall IoU, Mean IoU
What metrics were used to measure the RefTR model in the Referring Transformer: A One-step Approach to Multi-task Visual Grounding paper on the RefCOCO testB dataset?
Overall IoU, Mean IoU
What metrics were used to measure the CRIS model in the CRIS: CLIP-Driven Referring Image Segmentation paper on the RefCOCO testB dataset?
Overall IoU, Mean IoU
What metrics were used to measure the VLT model in the Vision-Language Transformer and Query Generation for Referring Segmentation paper on the RefCOCO testB dataset?
Overall IoU, Mean IoU
What metrics were used to measure the SHNet model in the Comprehensive Multi-Modal Interactions for Referring Image Segmentation paper on the RefCOCO testB dataset?
Overall IoU, Mean IoU
What metrics were used to measure the CMPC model in the Referring Image Segmentation via Cross-Modal Progressive Comprehension paper on the RefCOCO testB dataset?
Overall IoU, Mean IoU
What metrics were used to measure the BRINet model in the Bi-Directional Relationship Inferring Network for Referring Image Segmentation paper on the RefCOCO testB dataset?
Overall IoU, Mean IoU
What metrics were used to measure the STEP (1-fold) model in the See-Through-Text Grouping for Referring Image Segmentation paper on the RefCOCO testB dataset?
Overall IoU, Mean IoU
What metrics were used to measure the CMSA model in the Cross-Modal Self-Attention Network for Referring Image Segmentation paper on the RefCOCO testB dataset?
Overall IoU, Mean IoU
What metrics were used to measure the RefVOS with BERT Pre-train model in the RefVOS: A Closer Look at Referring Expressions for Video Object Segmentation paper on the RefCOCO testB dataset?
Overall IoU, Mean IoU
What metrics were used to measure the LANG2SEG model in the Referring Expression Object Segmentation with Caption-Aware Consistency paper on the RefCOCO testB dataset?
Overall IoU, Mean IoU
What metrics were used to measure the MAttNet model in the MAttNet: Modular Attention Network for Referring Expression Comprehension paper on the RefCOCO testB dataset?
Overall IoU, Mean IoU
What metrics were used to measure the MUTR model in the Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation paper on the Referring Expressions for DAVIS 2016 & 2017 dataset?
F, J, J&F 1st frame
What metrics were used to measure the PolyFormer-L model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the RefCOCOg-val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the PolyFormer-B model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the RefCOCOg-val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the X-Decoder (DaViT-d5) model in the Generalized Decoding for Pixel, Image, and Language paper on the RefCOCOg-val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the VLT (Swin-B) model in the VLT: Vision-Language Transformer and Query Generation for Referring Segmentation paper on the RefCOCOg-val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the LAVT model in the LAVT: Language-Aware Vision Transformer for Referring Image Segmentation paper on the RefCOCOg-val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the VLT (Darknet53) model in the Vision-Language Transformer and Query Generation for Referring Segmentation paper on the RefCOCOg-val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the SHNet model in the Comprehensive Multi-Modal Interactions for Referring Image Segmentation paper on the RefCOCOg-val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the IEP-Ref (700K prog.) model in the CLEVR-Ref+: Diagnosing Visual Reasoning with Referring Expressions paper on the CLEVR-Ref+ dataset?
IoU
What metrics were used to measure the PolyFormer-L model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the RefCOCOg-test dataset?
Overall IoU, Mean IoU
What metrics were used to measure the PolyFormer-B model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the RefCOCOg-test dataset?
Overall IoU, Mean IoU
What metrics were used to measure the LAVT (Swin-B) model in the LAVT: Language-Aware Vision Transformer for Referring Image Segmentation paper on the RefCOCOg-test dataset?
Overall IoU, Mean IoU
What metrics were used to measure the VLT (Darknet53) model in the Vision-Language Transformer and Query Generation for Referring Segmentation paper on the RefCOCOg-test dataset?
Overall IoU, Mean IoU
What metrics were used to measure the RefVos model in the RefVOS: A Closer Look at Referring Expressions for Video Object Segmentation paper on the A2Dre test dataset?
Overall IoU, Mean IoU
What metrics were used to measure the MaIL model in the MaIL: A Unified Mask-Image-Language Trimodal Network for Referring Image Segmentation paper on the G-Ref val dataset?
Overall IoU
What metrics were used to measure the UNINEXT-H model in the Universal Instance Perception as Object Discovery and Retrieval paper on the RefCOCO+ val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the PolyFormer-L model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the RefCOCO+ val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the PolyFormer-B model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the RefCOCO+ val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the ReLA model in the GRES: Generalized Referring Expression Segmentation paper on the RefCOCO+ val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the VLT model in the VLT: Vision-Language Transformer and Query Generation for Referring Segmentation paper on the RefCOCO+ val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the CRIS model in the CRIS: CLIP-Driven Referring Image Segmentation paper on the RefCOCO+ val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the MaIL model in the MaIL: A Unified Mask-Image-Language Trimodal Network for Referring Image Segmentation paper on the RefCOCO+ val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the LAVT model in the LAVT: Language-Aware Vision Transformer for Referring Image Segmentation paper on the RefCOCO+ val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the VLT model in the Vision-Language Transformer and Query Generation for Referring Segmentation paper on the RefCOCO+ val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the SHNet model in the Comprehensive Multi-Modal Interactions for Referring Image Segmentation paper on the RefCOCO+ val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the CMPC model in the Referring Image Segmentation via Cross-Modal Progressive Comprehension paper on the RefCOCO+ val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the BRINet model in the Bi-Directional Relationship Inferring Network for Referring Image Segmentation paper on the RefCOCO+ val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the STEP (5-fold) model in the See-Through-Text Grouping for Referring Image Segmentation paper on the RefCOCO+ val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the MAttNet model in the MAttNet: Modular Attention Network for Referring Expression Comprehension paper on the RefCOCO+ val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the RefVOS with BERT + MLM loss model in the RefVOS: A Closer Look at Referring Expressions for Video Object Segmentation paper on the RefCOCO+ val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the CMSA model in the Cross-Modal Self-Attention Network for Referring Image Segmentation paper on the RefCOCO+ val dataset?
Overall IoU, Mean IoU
What metrics were used to measure the UNINEXT-H model in the Universal Instance Perception as Object Discovery and Retrieval paper on the RefCOCO+ testA dataset?
Overall IoU, Mean IoU
What metrics were used to measure the PolyFormer-L model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the RefCOCO+ testA dataset?
Overall IoU, Mean IoU
What metrics were used to measure the PolyFormer-B model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the RefCOCO+ testA dataset?
Overall IoU, Mean IoU
What metrics were used to measure the ReLA model in the GRES: Generalized Referring Expression Segmentation paper on the RefCOCO+ testA dataset?
Overall IoU, Mean IoU
What metrics were used to measure the VLT model in the VLT: Vision-Language Transformer and Query Generation for Referring Segmentation paper on the RefCOCO+ testA dataset?
Overall IoU, Mean IoU
What metrics were used to measure the LAVT model in the LAVT: Language-Aware Vision Transformer for Referring Image Segmentation paper on the RefCOCO+ testA dataset?
Overall IoU, Mean IoU
What metrics were used to measure the CRIS model in the CRIS: CLIP-Driven Referring Image Segmentation paper on the RefCOCO+ testA dataset?
Overall IoU, Mean IoU
What metrics were used to measure the MaIL model in the MaIL: A Unified Mask-Image-Language Trimodal Network for Referring Image Segmentation paper on the RefCOCO+ testA dataset?
Overall IoU, Mean IoU
What metrics were used to measure the VLT model in the Vision-Language Transformer and Query Generation for Referring Segmentation paper on the RefCOCO+ testA dataset?
Overall IoU, Mean IoU
What metrics were used to measure the SHNet model in the Comprehensive Multi-Modal Interactions for Referring Image Segmentation paper on the RefCOCO+ testA dataset?
Overall IoU, Mean IoU
What metrics were used to measure the CMPC model in the Referring Image Segmentation via Cross-Modal Progressive Comprehension paper on the RefCOCO+ testA dataset?
Overall IoU, Mean IoU
What metrics were used to measure the BRINet model in the Bi-Directional Relationship Inferring Network for Referring Image Segmentation paper on the RefCOCO+ testA dataset?
Overall IoU, Mean IoU
What metrics were used to measure the MAttNet model in the MAttNet: Modular Attention Network for Referring Expression Comprehension paper on the RefCOCO+ testA dataset?
Overall IoU, Mean IoU
What metrics were used to measure the STEP (5-fold) model in the See-Through-Text Grouping for Referring Image Segmentation paper on the RefCOCO+ testA dataset?
Overall IoU, Mean IoU
What metrics were used to measure the RefVOS with BERT + MLM loss model in the RefVOS: A Closer Look at Referring Expressions for Video Object Segmentation paper on the RefCOCO+ testA dataset?
Overall IoU, Mean IoU
What metrics were used to measure the CMSA model in the Cross-Modal Self-Attention Network for Referring Image Segmentation paper on the RefCOCO+ testA dataset?
Overall IoU, Mean IoU
What metrics were used to measure the UNINEXT-H model in the Universal Instance Perception as Object Discovery and Retrieval paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the PolyFormer-L model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the PolyFormer-B model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the ReLA model in the GRES: Generalized Referring Expression Segmentation paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the VPD model in the Unleashing Text-to-Image Diffusion Models for Visual Perception paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the VLT model in the VLT: Vision-Language Transformer and Query Generation for Referring Segmentation paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the RefTR model in the Referring Transformer: A One-step Approach to Multi-task Visual Grounding paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the CRIS model in the CRIS: CLIP-Driven Referring Image Segmentation paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the MaIL model in the MaIL: A Unified Mask-Image-Language Trimodal Network for Referring Image Segmentation paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the VLT model in the Vision-Language Transformer and Query Generation for Referring Segmentation paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the SHNet model in the Comprehensive Multi-Modal Interactions for Referring Image Segmentation paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the CMPC model in the Referring Image Segmentation via Cross-Modal Progressive Comprehension paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the BRINet model in the Bi-Directional Relationship Inferring Network for Referring Image Segmentation paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the RefVOS with BERT + MLM loss model in the RefVOS: A Closer Look at Referring Expressions for Video Object Segmentation paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the LANG2SEG model in the Referring Expression Object Segmentation with Caption-Aware Consistency paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the RefVOS with BERT Pre-train model in the RefVOS: A Closer Look at Referring Expressions for Video Object Segmentation paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the CMSA model in the Cross-Modal Self-Attention Network for Referring Image Segmentation paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the STEP (1-fold) model in the See-Through-Text Grouping for Referring Image Segmentation paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the MAttNet model in the MAttNet: Modular Attention Network for Referring Expression Comprehension paper on the RefCOCO val dataset?
Overall IoU, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9, Mean IoU
What metrics were used to measure the SgMg (Video-Swin-B) model in the Spectrum-guided Multi-granularity Referring Video Object Segmentation paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the SOC (Video-Swin-B) model in the SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the ReferFormer (Video-Swin-B) model in the Language as Queries for Referring Video Object Segmentation paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the SOC (Video-Swin-T) model in the SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the MANET model in the Multi-Attention Network for Compressed Video Referring Object Segmentation paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the VLIDE model in the Deeply Interleaved Two-Stream Encoder for Referring Video Segmentation paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the Locater model in the Local-Global Context Aware Transformer for Language-Guided Video Segmentation paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the MTTR (w=10) model in the End-to-End Referring Video Object Segmentation with Multimodal Transformers paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the MTTR (w=8) model in the End-to-End Referring Video Object Segmentation with Multimodal Transformers paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the mmmmtbvs model in the Modeling Motion with Multi-Modal Features for Text-Based Video Segmentation paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the CMPC-V (I3D) model in the Cross-Modal Progressive Comprehension for Referring Segmentation paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the Hui et al. model in the Collaborative Spatial-Temporal Modeling for Language-Queried Video Actor Segmentation paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the AAMN model in the Actor and Action Modular Network for Text-based Video Segmentation paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the PRPE model in the Polar Relative Positional Encoding for Video-Language Segmentation paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the CMPC-V (R2D) model in the Cross-Modal Progressive Comprehension for Referring Segmentation paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the CMDy model in the Context Modulated Dynamic Networks for Actor and Action Video Segmentation with Language Queries paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the VT-Capsule model in the Visual-Textual Capsule Routing for Text-Based Video Segmentation paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the ACGA model in the Asymmetric Cross-Guided Attention Network for Actor and Action Video Segmentation From Natural Language Query paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the Gavrilyuk et al. (Optical flow) model in the Actor and Action Video Segmentation from a Sentence paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9
What metrics were used to measure the Gavrilyuk et al. model in the Actor and Action Video Segmentation from a Sentence paper on the A2D Sentences dataset?
AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9