prompts | metrics_response |
|---|---|
What metrics were used to measure the Mask2Former (Swin-L) model in the Mask2Former for Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the SeqFormer (Swin-L) model in the SeqFormer: Sequential Transformer for Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the DeVIS (Swin-L) model in the DeVIS: Making Deformable Transformers Work for Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the InstanceFormer (Swin-L) model in the InstanceFormer: An Online Video Instance Segmentation Framework paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the TCIS (Swin-S) model in the 1st Place Solution for YouTubeVOS Challenge 2021: Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the Video K-Net (Swin-Base) model in the Video K-Net: A Simple, Strong, and Unified Baseline for Video Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the NOVIS (ResNet-50) model in the NOVIS: A Case for End-to-End Near-Online Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the IDOL (ResNet-50) model in the In Defense of Online Models for Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the Mask2Former (ResNet-101) model in the Mask2Former for Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the SeqFormer (ResNet-101) model in the SeqFormer: Sequential Transformer for Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the MSN model in the MSN: Efficient Online Mask Selection Network for Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the SeqFormer (ResNet-50) model in the SeqFormer: Sequential Transformer for Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the Mask2Former (ResNet-50) model in the Mask2Former for Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the InstanceFormer (ResNet-50) model in the InstanceFormer: An Online Video Instance Segmentation Framework paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the DeVIS (ResNet-50) model in the DeVIS: Making Deformable Transformers Work for Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the IFC (ResNet-50) model in the Video Instance Segmentation using Inter-Frame Communication Transformers paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the VisTR (ResNet-101) model in the End-to-End Video Instance Segmentation with Transformers paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the VSTAM model in the Video Sparse Transformer With Attention-Guided Memory for Video Object Detection paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the STC (ResNet-50) model in the STC: Spatio-Temporal Contrastive Learning for Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the CrossVIS (ResNet-101) model in the Crossover Learning for Fast Online Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the VisTR (ResNet-50) model in the End-to-End Video Instance Segmentation with Transformers paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the PCAN (ResNet-50) model in the Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the ObjProp (ResNet-50) model in the Object Propagation via Inter-Frame Attentions for Temporally Stable Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the CompFeat (ResNet-50) model in the CompFeat: Comprehensive Feature Aggregation for Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the CSipMask model in the Occluded Video Instance Segmentation: A Benchmark paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the STEm-Seg (ResNet-101) model in the STEm-Seg: Spatio-temporal Embeddings for Instance Segmentation in Videos paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the SipMask (ResNet-50, ms-train, single-scale test) model in the SipMask: Spatial Information Preservation for Fast Image and Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the TraDeS model in the Track to Detect and Segment: An Online Multi-Object Tracker paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the SipMask (ResNet-50, single-scale test) model in the SipMask: Spatial Information Preservation for Fast Image and Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the CMaskTrack R-CNN model in the Occluded Video Instance Segmentation: A Benchmark paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the STEm-Seg (ResNet-50) model in the STEm-Seg: Spatio-temporal Embeddings for Instance Segmentation in Videos paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the MaskTrack R-CNN (ResNet-50, single-scale training and test) model in the Video Instance Segmentation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the UniTrack model in the Do Different Tracking Tasks Require Different Appearance Models? paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the OSMN model in the Efficient Video Object Segmentation via Network Modulation paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the DeepSORT model in the Simple Online and Realtime Tracking with a Deep Association Metric paper on the YouTube-VIS validation dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the MaskFreeVIS model in the Mask-Free Video Instance Segmentation paper on the YouTube-VIS (trained with no video masks) dataset? | AP |
What metrics were used to measure the VMT (Swin-L) model in the Video Mask Transfiner for High-Quality Video Instance Segmentation paper on the HQ-YTVIS dataset? | Tube-Boundary AP |
What metrics were used to measure the SeqFormer (Swin-L) model in the SeqFormer: Sequential Transformer for Video Instance Segmentation paper on the HQ-YTVIS dataset? | Tube-Boundary AP |
What metrics were used to measure the VMT (R101) model in the Video Mask Transfiner for High-Quality Video Instance Segmentation paper on the HQ-YTVIS dataset? | Tube-Boundary AP |
What metrics were used to measure the VMT (R50) model in the Video Mask Transfiner for High-Quality Video Instance Segmentation paper on the HQ-YTVIS dataset? | Tube-Boundary AP |
What metrics were used to measure the Temporal RoI Align model in the Temporal RoI Align for Video Object Recognition paper on the YouTube-VIS dataset? | mask AP |
What metrics were used to measure the RefineVIS (Swin-L, online) model in the RefineVIS: Video Instance Segmentation with Temporal Attention Refinement paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the GRAtt-VIS (Swin-L) model in the GRAtt-VIS: Gated Residual Attention for Auto Rectifying Video Instance Segmentation paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the TarViS (Swin-L) model in the TarViS: A Unified Approach for Target-based Video Segmentation paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the DVIS (Swin-L) model in the DVIS: Decoupled Video Instance Segmentation Framework paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the GenVIS (Swin-L) model in the A Generalized Framework for Video Instance Segmentation paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the NOVIS (Swin-L) model in the NOVIS: A Case for End-to-End Near-Online Video Instance Segmentation paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the Tube-Link (Swin-L) model in the Tube-Link: A Flexible Cross Tube Framework for Universal Video Segmentation paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the VITA (Swin-L) model in the VITA: Video Instance Segmentation via Object Token Association paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the IDOL (Swin-L) model in the In Defense of Online Models for Video Instance Segmentation paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the MinVIS (Swin-L) model in the MinVIS: A Minimal Video Instance Segmentation Framework without Video-based Training paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the DeVIS (Swin-L) model in the DeVIS: Making Deformable Transformers Work for Video Instance Segmentation paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the InstanceFormer (Swin-L) model in the InstanceFormer: An Online Video Instance Segmentation Framework paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the TarViS (Swin-T) model in the TarViS: A Unified Approach for Target-based Video Segmentation paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the GRAtt-VIS (ResNet-50) model in the GRAtt-VIS: Gated Residual Attention for Auto Rectifying Video Instance Segmentation paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the TarViS (ResNet-50) model in the TarViS: A Unified Approach for Target-based Video Segmentation paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the NOVIS (ResNet-50) model in the NOVIS: A Case for End-to-End Near-Online Video Instance Segmentation paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the DeVIS (ResNet-50) model in the DeVIS: Making Deformable Transformers Work for Video Instance Segmentation paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the InstanceFormer (ResNet-50) model in the InstanceFormer: An Online Video Instance Segmentation Framework paper on the YouTube-VIS 2021 dataset? | mask AP, AP50, AP75, AR1, AR10 |
What metrics were used to measure the DVIS (Swin-L, Offline) model in the DVIS: Decoupled Video Instance Segmentation Framework paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the UNINEXT (ViT-H, Online) model in the Universal Instance Perception as Object Discovery and Retrieval paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the DVIS (Swin-L, Online) model in the DVIS: Decoupled Video Instance Segmentation Framework paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the CTVIS (Swin-L) model in the CTVIS: Consistent Training for Online Video Instance Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the RefineVIS (Swin-L, offline) model in the RefineVIS: Video Instance Segmentation with Temporal Attention Refinement paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the GRAtt-VIS (Swin-L) model in the GRAtt-VIS: Gated Residual Attention for Auto Rectifying Video Instance Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the GenVIS (Swin-L) model in the A Generalized Framework for Video Instance Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the NOVIS (Swin-L) model in the NOVIS: A Case for End-to-End Near-Online Video Instance Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the TarViS (Swin-L) model in the TarViS: A Unified Approach for Target-based Video Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the IDOL (Swin-L) model in the In Defense of Online Models for Video Instance Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the ROVIS (Swin-L) model in the Robust Online Video Instance Segmentation with Track Queries paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the MinVIS (Swin-L) model in the MinVIS: A Minimal Video Instance Segmentation Framework without Video-based Training paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the GRAtt-VIS (ResNet-50) model in the GRAtt-VIS: Gated Residual Attention for Auto Rectifying Video Instance Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the CTVIS (ResNet-50) model in the CTVIS: Consistent Training for Online Video Instance Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the DeVIS (Swin-L) model in the DeVIS: Making Deformable Transformers Work for Video Instance Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the UNINEXT (ResNet-50, Online) model in the Universal Instance Perception as Object Discovery and Retrieval paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the TarViS (Swin-T) model in the TarViS: A Unified Approach for Target-based Video Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the NOVIS (ResNet-50) model in the NOVIS: A Case for End-to-End Near-Online Video Instance Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the TarViS (ResNet-50) model in the TarViS: A Unified Approach for Target-based Video Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the IDOL (ResNet-50) model in the In Defense of Online Models for Video Instance Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the Tube-Link (ResNet-50) model in the Tube-Link: A Flexible Cross Tube Framework for Universal Video Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the VITA (Swin-L) model in the VITA: Video Instance Segmentation via Object Token Association paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the DeVIS (ResNet-50) model in the DeVIS: Making Deformable Transformers Work for Video Instance Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the InstanceFormer (Swin-L) model in the InstanceFormer: An Online Video Instance Segmentation Framework paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the InstanceFormer (ResNet-50) model in the InstanceFormer: An Online Video Instance Segmentation Framework paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the CrossVIS (ResNet-50, calibration) model in the Crossover Learning for Fast Online Video Instance Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the TeViT (ResNet-50) model in the Temporally Efficient Vision Transformer for Video Instance Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the Mask2Former-VIS model in the Mask2Former for Video Instance Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the STC (ResNet-50) model in the STC: Spatio-Temporal Contrastive Learning for Video Instance Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the CMaskTrack R-CNN (ResNet-50) model in the Occluded Video Instance Segmentation: A Benchmark paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the D2Conv3D (ResNet-50) model in the D2Conv3D: Dynamic Dilated Convolutions for Object Segmentation in Videos paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the CrossVIS (ResNet-50) model in the Crossover Learning for Fast Online Video Instance Segmentation paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the CSipMask (ResNet-50) model in the Occluded Video Instance Segmentation: A Benchmark paper on the OVIS validation dataset? | mask AP, AP50, AP75, APho, APmo, AR1, APso, AR10 |
What metrics were used to measure the model proposed in the Visual Speech Enhancement Without A Real Visual Stream paper on the LRS2+VGGSound dataset? | CBAK, COVL, CSIG, PESQ, STOI |
What metrics were used to measure the model proposed in the Visual Speech Enhancement Without A Real Visual Stream paper on the LRS3+VGGSound dataset? | CBAK, COVL, CSIG, PESQ, STOI |
What metrics were used to measure the BiLSTM (SparkNLP) model in the Improving Clinical Document Understanding on COVID-19 Research with Spark NLP paper on the 2010 i2b2/VA dataset? | Micro F1 |
What metrics were used to measure the BERTlarge (MIMIC) model in the Enhancing Clinical Concept Extraction with Contextual Embeddings paper on the 2010 i2b2/VA dataset? | Exact Span F1 |
What metrics were used to measure the CharacterBERT (base, medical) model in the CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters paper on the 2010 i2b2/VA dataset? | Exact Span F1 |
What metrics were used to measure the ClinicalBERT model in the Cost-effective Selection of Pretraining Data: A Case Study of Pretraining BERT on Social Media paper on the 2010 i2b2/VA dataset? | Exact Span F1 |
What metrics were used to measure the ELMo (finetuned on i2b2) + word2vec (i2b2) model in the Embedding Strategies for Specialized Domains: Application to Clinical Entity Recognition paper on the 2010 i2b2/VA dataset? | Exact Span F1 |
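Each row above pairs a natural-language `prompts` question with a comma-separated `metrics_response` string. Below is a minimal sketch of loading and splitting such rows with the Hugging Face `datasets` library; the repository ID `"user/vis-metrics-qa"` is a hypothetical placeholder (substitute the dataset's actual Hub name), while the column names come from the table header above.

```python
from datasets import load_dataset

# "user/vis-metrics-qa" is a hypothetical placeholder ID; replace it
# with the dataset's real Hub repository name.
ds = load_dataset("user/vis-metrics-qa", split="train")

# Each row pairs a question with a comma-separated list of metric names.
row = ds[0]
print(row["prompts"])           # the question text
print(row["metrics_response"])  # e.g. "mask AP, AP50, AP75, AR1, AR10"

# Split the response string into individual metric names for downstream use.
metrics = [m.strip() for m in row["metrics_response"].split(",")]
print(metrics)                  # ['mask AP', 'AP50', 'AP75', 'AR1', 'AR10']
```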