| prompts | metrics_response |
|---|---|
What metrics were used to measure the OneFormer (DiNAT-L, multi-scale) model in the OneFormer: One Transformer to Rule Universal Image Segmentation paper on the Mapillary val dataset? | mIoU |
What metrics were used to measure the Mask2Former (Swin-L, multiscale) model in the Masked-attention Mask Transformer for Universal Image Segmentation paper on the Mapillary val dataset? | mIoU |
What metrics were used to measure the MaskFormer (ResNet-50) model in the Per-Pixel Classification is Not All You Need for Semantic Segmentation paper on the Mapillary val dataset? | mIoU |
What metrics were used to measure the NiseNet model in the What's There in the Dark paper on the Mapillary val dataset? | mIoU |
What metrics were used to measure the SegBlocks-RN50 (t=0.4) model in the SegBlocks: Block-Based Dynamic Resolution Networks for Real-Time Segmentation paper on the Mapillary val dataset? | mIoU |
What metrics were used to measure the FPN EfficientNet-B4 w/ Aux loss model in the dacl10k: Benchmark for Semantic Bridge Damage Segmentation paper on the dacl10k v1 testdev dataset? | mIoU |
What metrics were used to measure the DeepLabv3+ EfficientNet-B4 model in the dacl10k: Benchmark for Semantic Bridge Damage Segmentation paper on the dacl10k v1 testdev dataset? | mIoU |
What metrics were used to measure the SegFormer mit-b1 model in the dacl10k: Benchmark for Semantic Bridge Damage Segmentation paper on the dacl10k v1 testdev dataset? | mIoU |
What metrics were used to measure the SIW model in the The devil is in the labels: Semantic segmentation from sentences paper on the WildDash dataset? | Mean IoU |
What metrics were used to measure the NiseNet model in the What's There in the Dark paper on the BDD100K val dataset? | mIoU |
What metrics were used to measure the ICT-Net model in the Semantic Segmentation from Remote Sensor Data and the Exploitation of Latent Learning for Classification of Auxiliary Tasks paper on the AIRS dataset? | IoU |
What metrics were used to measure the TIMF model in the GAMUS: A Geometry-aware Multi-modal Semantic Segmentation Benchmark for Remote Sensing Data paper on the GAMUS dataset? | mIoU |
What metrics were used to measure the CMX model in the CMX: Cross-Modal Fusion for RGB-X Semantic Segmentation with Transformers paper on the GAMUS dataset? | mIoU |
What metrics were used to measure the VCD model in the Variational Context-Deformable ConvNets for Indoor Scene Parsing paper on the GAMUS dataset? | mIoU |
What metrics were used to measure the RTFNet model in the RTFNet: RGB-Thermal Fusion Network for Semantic Segmentation of Urban Scenes paper on the GAMUS dataset? | mIoU |
What metrics were used to measure the ShapeConv model in the ShapeConv: Shape-aware Convolutional Layer for Indoor RGB-D Semantic Segmentation paper on the GAMUS dataset? | mIoU |
What metrics were used to measure the MFNet model in the MFNet: Towards real-time semantic segmentation for autonomous vehicles with multi-spectral scenes paper on the GAMUS dataset? | mIoU |
What metrics were used to measure the PlainSeg (EVA-02-L) model in the Minimalist and High-Performance Semantic Segmentation with Plain Vision Transformers paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the InternImage-H model in the InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the RSSeg-ViT-L (BEiT pretrain) model in the Representation Separation for Semantic Segmentation with Vision Transformers paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the ViT-Adapter-L (Mask2Former, BEiT pretrain) model in the Vision Transformer Adapter for Dense Predictions paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the ViT-Adapter-L (UperNet, BEiT pretrain) model in the Vision Transformer Adapter for Dense Predictions paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the RSSeg-ViT-L model in the Representation Separation for Semantic Segmentation with Vision Transformers paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the SegViT (ours) model in the SegViT: Semantic Segmentation with Plain Vision Transformers paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the CAA + CAR (ConvNeXt-Large + JPU) model in the CAR: Class-aware Regularizations for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the SenFormer (Swin-L) model in the Efficient Self-Ensemble for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the Sequential Ensemble (Segformer + HRNet) model in the Sequential Ensembling for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the CAA + Simple decoder (Efficientnet-B7) model in the Channelized Axial Attention for Semantic Segmentation -- Considering Channel Relation within Spatial Attention for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the DPT-Hybrid model in the Vision Transformers for Dense Prediction paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the CAA (Efficientnet-B7) model in the Channelized Axial Attention for Semantic Segmentation -- Considering Channel Relation within Spatial Attention for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the HRNetV2 + OCR + RMI (PaddleClas pretrained) model in the Segmentation Transformer: Object-Contextual Representations for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the Seg-L-Mask/16 model in the Segmenter: Transformer for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the ResNeSt-269 model in the ResNeSt: Split-Attention Networks paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the ResNeSt-200 model in the ResNeSt: Split-Attention Networks paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the CondNet(ResNest-101) model in the CondNet: Conditional Classifier for Scene Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the SenFormer (ResNet-101) model in the Efficient Self-Ensemble for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the ResNeSt-101 model in the ResNeSt: Split-Attention Networks paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the OCR (HRNetV2-W48) model in the Segmentation Transformer: Object-Contextual Representations for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the GPaCo (ResNet101) model in the Generalized Parametric Contrastive Learning paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the CondNet(ResNet-101) model in the CondNet: Conditional Classifier for Scene Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the SETR-MLA (16, 80k, MS) model in the Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the DCNAS model in the DCNAS: Densely Connected Neural Architecture Search for Semantic Image Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the DRAN(ResNet-101) model in the Scene Segmentation with Dual Relation-aware Attention Network paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the DNL model in the Disentangled Non-Local Neural Networks paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the HamNet (ResNet-101) model in the Is Attention Better Than Matrix Decomposition? paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the CAA (ResNet-101) model in the Channelized Axial Attention for Semantic Segmentation -- Considering Channel Relation within Spatial Attention for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the OCR (ResNet-101) model in the Segmentation Transformer: Object-Contextual Representations for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the SIW(Segformer-B5) model in the The devil is in the labels: Semantic segmentation from sentences paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the CFNet (ResNet-101) model in the Co-Occurrent Features in Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the CFNet (ResNet-101) model in the Deep High-Resolution Representation Learning for Visual Recognition paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the HRNetV2 HRNetV2-W48 model in the Deep High-Resolution Representation Learning for Visual Recognition paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the CPN(ResNet-101) model in the Context Prior for Scene Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the LaU-regression-loss (ResNet-101) model in the Location-aware Upsampling for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the DGCNet (MS, ResNet-101) model in the Dual Graph Convolutional Network for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the BFP model in the Boundary-Aware Feature Propagation for Scene Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the SVCNet (ResNet-101) model in the Semantic Correlation Promoted Shape-Variant Context for Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the Joint Pyramid Upsampling + EncNet model in the FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the EMANet model in the Expectation-Maximization Attention Networks for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the Asymmetric ALNN model in the Asymmetric Non-local Neural Networks for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the CASSOD model in the CASSOD-Net: Cascaded and Separable Structures of Dilated Convolution for Embedded Vision Systems and Applications paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the DANet (ResNet-101) model in the Dual Attention Network for Scene Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the ICM model in the Scene Parsing via Integrated Classification Model and Variance-Based Regularization paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the DUpsampling model in the Decoders Matter for Semantic Segmentation: Data-Dependent Decoding Enables Flexible Feature Aggregation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the EncNet (ResNet-101) model in the Context Encoding for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the CFNet (ResNet-50) model in the Co-Occurrent Features in Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the ResNet-38 model in the Wider or Deeper: Revisiting the ResNet Model for Visual Recognition paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the PSPNet (ResNet-101) model in the Pyramid Scene Parsing Network paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the RefineNet model in the RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the DeepLabV2 model in the DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the VeryDeep model in the Bridging Category-level and Instance-level Semantic Image Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the Piecewise model in the Efficient piecewise training of deep structured models for semantic segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the Dilated-FCN2s model in the Efficient Yet Deep Convolutional Neural Networks for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the HO CRF model in the Higher Order Conditional Random Fields in Deep Neural Networks paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the BoxSup model in the BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the ParseNet model in the ParseNet: Looking Wider to See Better paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the CRF-RNN model in the Conditional Random Fields as Recurrent Neural Networks paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the FCN-8s model in the Fully Convolutional Networks for Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the CFM model in the Convolutional Feature Masking for Joint Object and Stuff Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the RBE2E model in the Region-based semantic segmentation with end-to-end training paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the SegCLIP model in the SegCLIP: Patch Aggregation with Learnable Centers for Open-Vocabulary Semantic Segmentation paper on the PASCAL Context dataset? | mIoU, Mean Accuracy, Pixel Accuracy |
What metrics were used to measure the Refign (HRDA) model in the Refign: Align and Refine for Adaptation of Semantic Segmentation to Adverse Conditions paper on the Dark Zurich dataset? | mIoU |
What metrics were used to measure the MIC model in the MIC: Masked Image Consistency for Context-Enhanced Domain Adaptation paper on the Dark Zurich dataset? | mIoU |
What metrics were used to measure the Refign (DAFormer) model in the Refign: Align and Refine for Adaptation of Semantic Segmentation to Adverse Conditions paper on the Dark Zurich dataset? | mIoU |
What metrics were used to measure the HRDA model in the HRDA: Context-Aware High-Resolution Domain-Adaptive Semantic Segmentation paper on the Dark Zurich dataset? | mIoU |
What metrics were used to measure the SePiCo model in the SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic Segmentation paper on the Dark Zurich dataset? | mIoU |
What metrics were used to measure the DAFormer model in the DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation paper on the Dark Zurich dataset? | mIoU |
What metrics were used to measure the GPS-GLASS model in the GPS-GLASS: Learning Nighttime Semantic Segmentation Using Daytime Video and GPS data paper on the Dark Zurich dataset? | mIoU |
What metrics were used to measure the SePiCo (DeepLab v2 ResNet-101) model in the SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic Segmentation paper on the Dark Zurich dataset? | mIoU |
What metrics were used to measure the MGCDA model in the Map-Guided Curriculum Domain Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation paper on the Dark Zurich dataset? | mIoU |
What metrics were used to measure the DANNet (DeepLab v2 ResNet-101) model in the DANNet: A One-Stage Domain Adaptation Network for Unsupervised Nighttime Semantic Segmentation paper on the Dark Zurich dataset? | mIoU |
What metrics were used to measure the GCMA model in the Guided Curriculum Model Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation paper on the Dark Zurich dataset? | mIoU |
What metrics were used to measure the DANNet model in the DANNet: A One-Stage Domain Adaptation Network for Unsupervised Nighttime Semantic Segmentation paper on the Dark Zurich dataset? | mIoU |
What metrics were used to measure the CIConv model in the Zero-Shot Day-Night Domain Adaptation with a Physics Prior paper on the Dark Zurich dataset? | mIoU |
What metrics were used to measure the SSMA model in the Self-Supervised Model Adaptation for Multimodal Semantic Segmentation paper on the SYNTHIA-CVPR’16 dataset? | Mean IoU |
What metrics were used to measure the AdapNet++ model in the Self-Supervised Model Adaptation for Multimodal Semantic Segmentation paper on the SYNTHIA-CVPR’16 dataset? | Mean IoU |
What metrics were used to measure the GA-Nav model in the GANav: Efficient Terrain Segmentation for Robot Navigation in Unstructured Outdoor Environments paper on the RUGD dataset? | AIOU, mIoU |
What metrics were used to measure the FasterSeg model in the FasterSeg: Searching for Faster Real-time Semantic Segmentation paper on the BDD dataset? | mIoU |
What metrics were used to measure the Trans4PASS+ model in the Behind Every Domain There is a Shift: Adapting Distortion-aware Vision Transformers for Panoramic Semantic Segmentation paper on the SynPASS dataset? | mIoU |
What metrics were used to measure the Trans4PASS model in the Bending Reality: Distortion-aware Transformers for Adapting to Panoramic Semantic Segmentation paper on the SynPASS dataset? | mIoU |
What metrics were used to measure the SegFormer model in the SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers paper on the SynPASS dataset? | mIoU |
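The table pairs a natural-language question about a model/paper/benchmark with the metric names reported for it. A minimal sketch of querying such pairs once parsed, in plain Python; the two sample rows are abbreviated copies of entries above, and the function name is an illustrative assumption:

```python
# Sample (prompt, metrics_response) pairs, abbreviated from the table above.
rows = [
    ("What metrics were used to measure the OneFormer (DiNAT-L, multi-scale) "
     "model ... on the Mapillary val dataset?", "mIoU"),
    ("What metrics were used to measure the FCN-8s model ... on the "
     "PASCAL Context dataset?", "mIoU, Mean Accuracy, Pixel Accuracy"),
]

def responses_for_dataset(rows, dataset_name):
    """Return the metrics_response values whose prompt mentions dataset_name."""
    return [resp for prompt, resp in rows if dataset_name in prompt]

print(responses_for_dataset(rows, "PASCAL Context"))
```

Filtering on a benchmark-name substring like this is enough to slice the table by dataset, since every prompt follows the same "… on the <dataset> dataset?" template.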