| prompts | metrics_response |
|---|---|
What metrics were used to measure the Li et al. model in the Tracking by Natural Language Specification paper on the A2D Sentences dataset? | AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9 |
What metrics were used to measure the Hu et al. model in the Segmentation from Natural Language Expressions paper on the A2D Sentences dataset? | AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9 |
What metrics were used to measure the HINet model in the Hierarchical interaction network for video object segmentation from referring expressions paper on the A2D Sentences dataset? | AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9 |
What metrics were used to measure the RefVOS model in the Hierarchical interaction network for video object segmentation from referring expressions paper on the A2D Sentences dataset? | AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9 |
What metrics were used to measure the ClawCraneNet model in the ClawCraneNet: Leveraging Object-level Relation for Text-based Video Segmentation paper on the A2D Sentences dataset? | AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9 |
What metrics were used to measure the CMSA+CFSA model in the Referring Segmentation in Images and Videos with Cross-Modal Self-Attention Network paper on the A2D Sentences dataset? | AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9 |
What metrics were used to measure the RefVOS model in the RefVOS: A Closer Look at Referring Expressions for Video Object Segmentation paper on the A2D Sentences dataset? | AP, IoU overall, IoU mean, Precision@0.5, Precision@0.6, Precision@0.7, Precision@0.8, Precision@0.9 |
What metrics were used to measure the GLIPv2 model in the GLIPv2: Unifying Localization and Vision-Language Understanding paper on the PhraseCut dataset? | Mean IoU, Pr@0.5, Pr@0.7, Pr@0.9 |
What metrics were used to measure the MDETR ENB3 model in the MDETR -- Modulated Detection for End-to-End Multi-Modal Understanding paper on the PhraseCut dataset? | Mean IoU, Pr@0.5, Pr@0.7, Pr@0.9 |
What metrics were used to measure the HULANet model in the PhraseCut: Language-based Image Segmentation in the Wild paper on the PhraseCut dataset? | Mean IoU, Pr@0.5, Pr@0.7, Pr@0.9 |
What metrics were used to measure the RMI model in the PhraseCut: Language-based Image Segmentation in the Wild paper on the PhraseCut dataset? | Mean IoU, Pr@0.5, Pr@0.7, Pr@0.9 |
What metrics were used to measure the MAttNet model in the PhraseCut: Language-based Image Segmentation in the Wild paper on the PhraseCut dataset? | Mean IoU, Pr@0.5, Pr@0.7, Pr@0.9 |
What metrics were used to measure the UNINEXT-H model in the Universal Instance Perception as Object Discovery and Retrieval paper on the DAVIS 2017 (val) dataset? | J&F 1st frame, J&F Full video |
What metrics were used to measure the DEVA (ReferFormer) model in the Tracking Anything with Decoupled Video Segmentation paper on the DAVIS 2017 (val) dataset? | J&F 1st frame, J&F Full video |
What metrics were used to measure the SgMg model in the Spectrum-guided Multi-granularity Referring Video Object Segmentation paper on the DAVIS 2017 (val) dataset? | J&F 1st frame, J&F Full video |
What metrics were used to measure the PolyFormer model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the DAVIS 2017 (val) dataset? | J&F 1st frame, J&F Full video |
What metrics were used to measure the ReferFormer model in the Language as Queries for Referring Video Object Segmentation paper on the DAVIS 2017 (val) dataset? | J&F 1st frame, J&F Full video |
What metrics were used to measure the URVOS + Refer-Youtube-VOS + ft. DAVIS model in the URVOS: Unified Referring Video Object Segmentation Network with a Large-Scale Benchmark paper on the DAVIS 2017 (val) dataset? | J&F 1st frame, J&F Full video |
What metrics were used to measure the HINet model in the Hierarchical interaction network for video object segmentation from referring expressions paper on the DAVIS 2017 (val) dataset? | J&F 1st frame, J&F Full video |
What metrics were used to measure the URVOS + Refer-Youtube-VOS model in the URVOS: Unified Referring Video Object Segmentation Network with a Large-Scale Benchmark paper on the DAVIS 2017 (val) dataset? | J&F 1st frame, J&F Full video |
What metrics were used to measure the RefVOS + SynthRef-YouTube-VIS model in the SynthRef: Generation of Synthetic Referring Expressions for Object Segmentation paper on the DAVIS 2017 (val) dataset? | J&F 1st frame, J&F Full video |
What metrics were used to measure the RefVOS model in the SynthRef: Generation of Synthetic Referring Expressions for Object Segmentation paper on the DAVIS 2017 (val) dataset? | J&F 1st frame, J&F Full video |
What metrics were used to measure the RefVOS model in the RefVOS: A Closer Look at Referring Expressions for Video Object Segmentation paper on the DAVIS 2017 (val) dataset? | J&F 1st frame, J&F Full video |
What metrics were used to measure the URVOS model in the URVOS: Unified Referring Video Object Segmentation Network with a Large-Scale Benchmark paper on the DAVIS 2017 (val) dataset? | J&F 1st frame, J&F Full video |
What metrics were used to measure the Khoreva et al. model in the Video Object Segmentation with Language Referring Expressions paper on the DAVIS 2017 (val) dataset? | J&F 1st frame, J&F Full video |
What metrics were used to measure the MaIL model in the MaIL: A Unified Mask-Image-Language Trimodal Network for Referring Image Segmentation paper on the G-Ref test A dataset? | Overall IoU |
What metrics were used to measure the UNINEXT-H model in the Universal Instance Perception as Object Discovery and Retrieval paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the RISCLIP-L model in the RISCLIP: Referring Image Segmentation Framework using CLIP paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the PolyFormer-L model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the PolyFormer-B model in the PolyFormer: Referring Image Segmentation as Sequential Polygon Generation paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the VLT model in the VLT: Vision-Language Transformer and Query Generation for Referring Segmentation paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the ReLA model in the GRES: Generalized Referring Expression Segmentation paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the RefTR model in the Referring Transformer: A One-step Approach to Multi-task Visual Grounding paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the CRIS model in the CRIS: CLIP-Driven Referring Image Segmentation paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the MaIL model in the MaIL: A Unified Mask-Image-Language Trimodal Network for Referring Image Segmentation paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the SHNet model in the Comprehensive Multi-Modal Interactions for Referring Image Segmentation paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the VLT model in the Vision-Language Transformer and Query Generation for Referring Segmentation paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the CMPC model in the Referring Image Segmentation via Cross-Modal Progressive Comprehension paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the BRINet model in the Bi-Directional Relationship Inferring Network for Referring Image Segmentation paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the RefVOS with BERT Pre-train model in the RefVOS: A Closer Look at Referring Expressions for Video Object Segmentation paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the MAttNet model in the MAttNet: Modular Attention Network for Referring Expression Comprehension paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the LANG2SEG model in the Referring Expression Object Segmentation with Caption-Aware Consistency paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the CMSA model in the Cross-Modal Self-Attention Network for Referring Image Segmentation paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the STEP (1-fold) model in the See-Through-Text Grouping for Referring Image Segmentation paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the RefVOS with Bi-LSTM model in the RefVOS: A Closer Look at Referring Expressions for Video Object Segmentation paper on the RefCOCO testA dataset? | Overall IoU, Mean IoU |
What metrics were used to measure the Weighted Box Fusion (WBF) model in the Ensemble Fusion for Small Object Detection paper on the SOD4SB Private Test dataset? | AP50 |
What metrics were used to measure the GFL + Test Time Augmentation model in the BandRe: Rethinking Band-Pass Filters for Scale-Wise Object Detection Evaluation paper on the SOD4SB Private Test dataset? | AP50 |
What metrics were used to measure the DL method (YOLOv8 + Ensemble) model in the MVA2023 Small Object Detection Challenge for Spotting Birds: Dataset, Methods, and Results paper on the SOD4SB Private Test dataset? | AP50 |
What metrics were used to measure the Swin Transformer + Hierarchical design model in the Small Object Detection for Birds with Swin Transformer paper on the SOD4SB Private Test dataset? | AP50 |
What metrics were used to measure the E2 method (Normalized Gaussian Wasserstein Distance + Switch Hard Augmentation + Multi scale train + Weight Moving Average + CenterNet + VarifocalNet) model in the MVA2023 Small Object Detection Challenge for Spotting Birds: Dataset, Methods, and Results paper on the SOD4SB Private Test dataset? | AP50 |
What metrics were used to measure the CFINet model in the Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning paper on the SODA-D dataset? | mAP@0.5:0.95 |
What metrics were used to measure the BeeDetector model in the A Method for Detection of Small Moving Objects in UAV Videos paper on the Bee4Exp Honeybee Detection dataset? | Average F1 |
What metrics were used to measure the Weighted Box Fusion (WBF) model in the Ensemble Fusion for Small Object Detection paper on the SOD4SB Public Test dataset? | AP50 |
What metrics were used to measure the GFL + Test Time Augmentation model in the BandRe: Rethinking Band-Pass Filters for Scale-Wise Object Detection Evaluation paper on the SOD4SB Public Test dataset? | AP50 |
What metrics were used to measure the DL method (YOLOv8 + Ensemble) model in the MVA2023 Small Object Detection Challenge for Spotting Birds: Dataset, Methods, and Results paper on the SOD4SB Public Test dataset? | AP50 |
What metrics were used to measure the Swin Transformer + Hierarchical design model in the Small Object Detection for Birds with Swin Transformer paper on the SOD4SB Public Test dataset? | AP50 |
What metrics were used to measure the E2 method (Normalized Gaussian Wasserstein Distance + Switch Hard Augmentation + Multi scale train + Weight Moving Average + CenterNet + VarifocalNet) model in the MVA2023 Small Object Detection Challenge for Spotting Birds: Dataset, Methods, and Results paper on the SOD4SB Public Test dataset? | AP50 |
What metrics were used to measure the EdgeMAC + whitening model in the Deep Shape Matching paper on the Chairs dataset? | R@1, R@10 |
What metrics were used to measure the Chairs net + CFF + HOLEF model in the Deep Spatial-Semantic Attention for Fine-Grained Sketch-Based Image Retrieval paper on the Chairs dataset? | R@1, R@10 |
What metrics were used to measure the Chairs net + model in the Sketch Me That Shoe paper on the Chairs dataset? | R@1, R@10 |
What metrics were used to measure the Shoes net + model in the Sketch Me That Shoe paper on the Chairs dataset? | R@1, R@10 |
What metrics were used to measure the CCA-3V-HOG + PCA model in the Cross-modal Subspace Learning for Fine-grained Sketch-based Image Retrieval paper on the Chairs dataset? | R@1, R@10 |
What metrics were used to measure the Dense-HOG + rankSVM model in the Sketch Me That Shoe paper on the Chairs dataset? | R@1, R@10 |
What metrics were used to measure the Sketch-a-Net + rankSVM model in the Sketch Me That Shoe paper on the Chairs dataset? | R@1, R@10 |
What metrics were used to measure the BoW-HOG + rankSVM model in the Sketch Me That Shoe paper on the Chairs dataset? | R@1, R@10 |
What metrics were used to measure the EdgeMAC + whitening model in the Deep Shape Matching paper on the Handbags dataset? | R@1, R@10 |
What metrics were used to measure the Handbags net + CFF + HOLEF model in the Deep Spatial-Semantic Attention for Fine-Grained Sketch-Based Image Retrieval paper on the Handbags dataset? | R@1, R@10 |
What metrics were used to measure the Handbags net model in the Deep Spatial-Semantic Attention for Fine-Grained Sketch-Based Image Retrieval paper on the Handbags dataset? | R@1, R@10 |
What metrics were used to measure the Chairs net + model in the Sketch Me That Shoe paper on the Handbags dataset? | R@1, R@10 |
What metrics were used to measure the Shoes net + model in the Sketch Me That Shoe paper on the Handbags dataset? | R@1, R@10 |
What metrics were used to measure the Dense-HOG + rankSVM model in the Sketch Me That Shoe paper on the Handbags dataset? | R@1, R@10 |
What metrics were used to measure the Sketch-a-Net + rankSVM model in the Sketch Me That Shoe paper on the Handbags dataset? | R@1, R@10 |
What metrics were used to measure the BoW-HOG + rankSVM model in the Sketch Me That Shoe paper on the Handbags dataset? | R@1, R@10 |
What metrics were used to measure the EdgeMAC + whitening model in the Deep Shape Matching paper on the Shoes dataset? | R@1, R@10 |
What metrics were used to measure the BUCTD-CoAM-W48 (DLCRNet) model in the Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity paper on the TriMouse-161 dataset? | mAP |
What metrics were used to measure the DLCRNet model in the Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity paper on the TriMouse-161 dataset? | mAP |
What metrics were used to measure the ResNet50_s4graph11 model in the Multi-animal pose estimation, identification and tracking with DeepLabCut paper on the TriMouse-161 dataset? | mAP |
What metrics were used to measure the DLCRNet_ms4graph11 model in the Multi-animal pose estimation, identification and tracking with DeepLabCut paper on the TriMouse-161 dataset? | mAP |
What metrics were used to measure the CID-W32 model in the Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity paper on the TriMouse-161 dataset? | mAP |
What metrics were used to measure the HRNet-W48 + Faster R-CNN model in the Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity paper on the Fish-100 dataset? | mAP |
What metrics were used to measure the BUCTD-preNet-W48 (DLCRNet) model in the Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity paper on the Fish-100 dataset? | mAP |
What metrics were used to measure the BUCTD-preNet-W48 (CID-W32) model in the Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity paper on the Fish-100 dataset? | mAP |
What metrics were used to measure the DLCRNet_ms4graph9 model in the Multi-animal pose estimation, identification and tracking with DeepLabCut paper on the Fish-100 dataset? | mAP |
What metrics were used to measure the BUCTD-preNet-W48 (CID-W32) model in the Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity paper on the Marmoset-8K dataset? | mAP |
What metrics were used to measure the CID-W32 model in the Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity paper on the Marmoset-8K dataset? | mAP |
What metrics were used to measure the BUCTD-CoAM-W48 (DLCRNet) model in the Rethinking pose estimation in crowds: overcoming the detection information-bottleneck and ambiguity paper on the Marmoset-8K dataset? | mAP |
What metrics were used to measure the DLCRNet_ms-graph34 model in the Multi-animal pose estimation, identification and tracking with DeepLabCut paper on the Marmoset-8K dataset? | mAP |
What metrics were used to measure the SuperAnimal-AnimalTokenPose model in the SuperAnimal pretrained pose estimation models for behavioral analysis paper on the Animal-Pose dataset? | AP |
What metrics were used to measure the 8 Stacked Hourglass Network model in the SyDog: A Synthetic Dog Dataset for Improved 2D Pose Estimation paper on the StanfordExtra dataset? | PCK@0.1 |
What metrics were used to measure the 2 Stacked Hourglass Network model in the SyDog: A Synthetic Dog Dataset for Improved 2D Pose Estimation paper on the StanfordExtra dataset? | PCK@0.1 |
What metrics were used to measure the Mask R-CNN model in the SyDog: A Synthetic Dog Dataset for Improved 2D Pose Estimation paper on the StanfordExtra dataset? | PCK@0.1 |
What metrics were used to measure the DeepLabCut-EfficientNet-B6 model in the Pretraining boosts out-of-domain robustness for pose estimation paper on the Horse-10 dataset? | PCK@0.3 (OOD), Normalized Error (OOD) |
What metrics were used to measure the DeepLabCut-EfficientNet-B4 model in the Pretraining boosts out-of-domain robustness for pose estimation paper on the Horse-10 dataset? | PCK@0.3 (OOD), Normalized Error (OOD) |
What metrics were used to measure the DeepLabCut-RESNET-101 model in the Pretraining boosts out-of-domain robustness for pose estimation paper on the Horse-10 dataset? | PCK@0.3 (OOD), Normalized Error (OOD) |
What metrics were used to measure the DeepLabCut-RESNET 50 model in the Pretraining boosts out-of-domain robustness for pose estimation paper on the Horse-10 dataset? | PCK@0.3 (OOD), Normalized Error (OOD) |
What metrics were used to measure the DeepLabCut-MOBILENETV2-1 model in the Pretraining boosts out-of-domain robustness for pose estimation paper on the Horse-10 dataset? | PCK@0.3 (OOD), Normalized Error (OOD) |
What metrics were used to measure the DeepLabCut-MOBILENETV2 0.35 model in the Pretraining boosts out-of-domain robustness for pose estimation paper on the Horse-10 dataset? | PCK@0.3 (OOD), Normalized Error (OOD) |
What metrics were used to measure the SuperAnimal-Quadruped HRNet-w32 model in the SuperAnimal pretrained pose estimation models for behavioral analysis paper on the Horse-10 dataset? | PCK@0.3 (OOD), Normalized Error (OOD) |
What metrics were used to measure the mmpose HRNet-w32 (w/ImageNet pretrained weights) model in the SuperAnimal pretrained pose estimation models for behavioral analysis paper on the Horse-10 dataset? | PCK@0.3 (OOD), Normalized Error (OOD) |
What metrics were used to measure the ViTPose+-H model in the ViTPose+: Vision Transformer Foundation Model for Generic Body Pose Estimation paper on the AP-10K dataset? | AP |
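
Most rows above draw on a small set of metric families. For the referring-segmentation benchmarks (A2D Sentences, PhraseCut, RefCOCO, G-Ref), overall IoU pools intersection and union across the whole test set, mean IoU averages per-sample IoU, and Precision@τ (Pr@τ) is the fraction of samples whose IoU exceeds the threshold τ; the DAVIS 2017 rows' J&F additionally averages region similarity J (mask IoU) with a boundary F-measure F. The following is a minimal sketch of the IoU-based definitions, assuming lists of binary NumPy masks; it only illustrates the formulas and is not the evaluation code of any cited paper.

```python
import numpy as np

def segmentation_metrics(preds, gts, thresholds=(0.5, 0.6, 0.7, 0.8, 0.9)):
    """Illustrative overall IoU, mean IoU, and Precision@tau over binary masks."""
    inter_total, union_total, ious = 0, 0, []
    for p, g in zip(preds, gts):
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        inter_total += inter
        union_total += union
        ious.append(inter / union if union > 0 else 1.0)  # empty-vs-empty counts as perfect
    ious = np.asarray(ious)
    return {
        "overall_iou": inter_total / max(union_total, 1),  # pooled over the test set
        "mean_iou": float(ious.mean()),                    # averaged per sample
        **{f"precision@{t}": float((ious > t).mean()) for t in thresholds},
    }
```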
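
The animal pose-estimation rows report AP/mAP (typically computed COCO-style from keypoint similarity) and PCK@α, under which a predicted keypoint counts as correct when it lies within α times a reference length of its ground truth. The reference length is dataset-specific (e.g. a bounding-box side or an anatomical segment), so `ref_lengths` below is an assumption of this sketch rather than any paper's exact protocol.

```python
import numpy as np

def pck(pred_kpts, gt_kpts, ref_lengths, alpha=0.1):
    """Illustrative PCK@alpha for 2D keypoints.

    pred_kpts, gt_kpts: (N, K, 2) arrays of predicted / ground-truth keypoints.
    ref_lengths: (N,) per-sample normalization lengths (dataset-specific).
    """
    dists = np.linalg.norm(pred_kpts - gt_kpts, axis=-1)  # (N, K) pixel distances
    correct = dists <= alpha * ref_lengths[:, None]       # threshold per sample
    return float(correct.mean())
```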
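
The fine-grained sketch-based retrieval rows (Chairs, Handbags, Shoes) report Recall@K (R@1, R@10): the fraction of query sketches whose matching photo appears among the K nearest gallery items. A minimal sketch, assuming L2-normalized embedding arrays and exactly one correct gallery item per query:

```python
import numpy as np

def recall_at_k(query_emb, gallery_emb, gt_index, k=1):
    """Illustrative R@K with one ground-truth gallery item per query.

    query_emb: (Q, D) and gallery_emb: (G, D), assumed L2-normalized.
    gt_index: (Q,) array with the index of each query's matching gallery item.
    """
    sims = query_emb @ gallery_emb.T                # cosine similarity, shape (Q, G)
    topk = np.argsort(-sims, axis=1)[:, :k]         # indices of the K best matches
    hits = (topk == gt_index[:, None]).any(axis=1)  # true if the match is in the top K
    return float(hits.mean())
```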