prompts | metrics_response |
|---|---|
What metrics were used to measure the HRNetV2 (train+val) model in the Deep High-Resolution Representation Learning for Visual Recognition paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the DANet (ResNet-101) model in the Dual Attention Network for Scene Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the CCNet model in the CCNet: Criss-Cross Attention for Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the BFP model in the Boundary-Aware Feature Propagation for Scene Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the DeepLabv3 (ResNet-101, coarse) model in the Rethinking Atrous Convolution for Semantic Image Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the CPN(ResNet-101) model in the Context Prior for Scene Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the Asymmetric ALNN model in the Asymmetric Non-local Neural Networks for Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the AdapNet++ model in the Self-Supervised Model Adaptation for Multimodal Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the SVCNet (ResNet-101) model in the Semantic Correlation Promoted Shape-Variant Context for Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the D3Net-L model in the Densely connected multidilated convolutional networks for dense prediction tasks paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the DenseASPP (DenseNet-161) model in the DenseASPP for Semantic Segmentation in Street Scenes paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the Smooth Network with Channel Attention Block model in the Learning a Discriminative Feature Network for Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the PSPNet++ model in the Pyramid Scene Parsing Network paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the PSANet (ResNet-101) model in the PSANet: Point-wise Spatial Attention Network for Scene Parsing paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the ESANet-R34-NBt1D model in the Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the DeepLabV3 with R-101 model in the Resolution-Aware Design of Atrous Rates for Semantic Segmentation Networks paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the DFN (ResNet-101) model in the Learning a Discriminative Feature Network for Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the AAF (ResNet-101) model in the Adaptive Affinity Fields for Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the ShelfNet-34 model in the ShelfNet for Fast Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the BiSeNet (ResNet-101) model in the BiSeNet: Bilateral Segmentation Network for Real-time Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the ResNet-38 model in the Wider or Deeper: Revisiting the ResNet Model for Visual Recognition paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the PSPNet model in the Pyramid Scene Parsing Network paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the DepthSeg (ResNet-101) model in the Recurrent Scene Parsing with Perspective Understanding in the Loop paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the DSSPN (ResNet-101) model in the Dynamic-structured Semantic Propagation Network paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the DUC-HDC (ResNet-101) model in the Understanding Convolution for Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the SaGe model in the Semantic-Aware Generation for Self-Supervised Visual Representation Learning paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the SwiftNetRN-18 model in the In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the RefineNet (ResNet-101) model in the RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the MobileNet V3-Large 1.0 model in the Searching for MobileNetV3 paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the SqueezeNAS (LAT Large) model in the SqueezeNAS: Fast neural architecture search for faster semantic segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the Multi Scale Spatial Attention model in the Semantic Segmentation With Multi Scale Spatial Attention For Self Driving Cars paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the FRRN model in the Full-Resolution Residual Networks for Semantic Segmentation in Street Scenes paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the LRR-4x model in the Laplacian Pyramid Reconstruction and Refinement for Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the Context model in the Efficient piecewise training of deep structured models for semantic segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the FasterSeg model in the FasterSeg: Searching for Faster Real-time Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the DFANet A model in the DFANet: Deep Feature Aggregation for Real-Time Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the LDFNet model in the Incorporating Luminance, Depth and Color Information by a Fusion-based Network for Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the LightSeg-DarkNet19 model in the LiteSeg: A Novel Lightweight ConvNet for Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the ESNet model in the ESNet: An Efficient Symmetric Network for Real-time Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the ICNet model in the ICNet for Real-Time Semantic Segmentation on High-Resolution Images paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the LEDNet model in the LEDNet: A Lightweight Encoder-Decoder Network for Real-Time Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the WASPnet (ours) model in the Waterfall Atrous Spatial Pooling Architecture for Efficient Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the DeepLab-CRF (ResNet-101) model in the DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the ERFNet (PyTorch) model in the ERFNet: Efficient Residual Factorized ConvNet for Real-time Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the Fast-SCNN model in the Fast-SCNN: Fast Semantic Segmentation Network paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the LightSeg-MobileNet model in the LiteSeg: A Novel Lightweight ConvNet for Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the LiteSeg-MobileNet model in the LiteSeg: A Novel Lightweight ConvNet for Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the Template-Based NAS-arch1 model in the Template-Based Automatic Search of Compact Semantic Segmentation Architectures paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the Template-Based NAS-arch0 model in the Template-Based Automatic Search of Compact Semantic Segmentation Architectures paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the EDANet model in the Efficient Dense Modules of Asymmetric Convolution for Real-Time Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the Dilation10 model in the Multi-Scale Context Aggregation by Dilated Convolutions paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the DPN model in the Semantic Image Segmentation via Deep Parsing Network paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the SqueezeNAS (LAT Small) model in the SqueezeNAS: Fast neural architecture search for faster semantic segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the SINet model in the SINet: Extreme Lightweight Portrait Segmentation Networks with Spatial Squeeze Modules and Information Blocking Decoder paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the ESPNetv2 model in the ESPNetv2: A Light-weight, Power Efficient, and General Purpose Convolutional Neural Network paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the FCN model in the Fully Convolutional Networks for Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the LightSeg-ShuffleNet model in the LiteSeg: A Novel Lightweight ConvNet for Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the LiteSeg-ShuffleNet model in the LiteSeg: A Novel Lightweight ConvNet for Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the DeepLab model in the Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the ENet + Lovász-Softmax model in the The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the ESPNet model in the ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the ENet model in the ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the SegNet model in the SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the IkshanaNet-1 model in the The Ikshana Hypothesis of Human Scene Understanding paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the IkshanaNet-2 model in the The Ikshana Hypothesis of Human Scene Understanding paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the IkshanaNet-3 model in the The Ikshana Hypothesis of Human Scene Understanding paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the HighResNet [32] model in the A Survey on Deep Learning Techniques for Stereo-based Depth Estimation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the SegStereo [68] model in the A Survey on Deep Learning Techniques for Stereo-based Depth Estimation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the AnyNet [88] model in the A Survey on Deep Learning Techniques for Stereo-based Depth Estimation paper on the Cityscapes test dataset? | Mean IoU (class), Category mIoU, Time (ms) |
What metrics were used to measure the MFSNet model in the MFSNet: A Multi Focus Segmentation Network for Skin Lesion Segmentation paper on the HAM10000 dataset? | Average Dice, Average IOU |
What metrics were used to measure the BEiT-3 model in the Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the EVA model in the EVA: Exploring the Limits of Masked Visual Representation Learning at Scale paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the FD-SwinV2-G model in the Contrastive Learning Rivals Masked Image Modeling in Fine-tuning via Feature Distillation paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the MaskDINO-SwinL model in the Mask DINO: Towards A Unified Transformer-based Framework for Object Detection and Segmentation paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the OneFormer (InternImage-H, emb_dim=256, multi-scale, 896x896) model in the OneFormer: One Transformer to Rule Universal Image Segmentation paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the ViT-Adapter-L (Mask2Former, BEiT pretrain) model in the Vision Transformer Adapter for Dense Predictions paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the OneFormer (DiNAT-L, multi-scale, 896x896) model in the OneFormer: One Transformer to Rule Universal Image Segmentation paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the ViT-Adapter-L (UperNet, BEiT pretrain) model in the Vision Transformer Adapter for Dense Predictions paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the OneFormer (DiNAT-L, multi-scale, 640x640) model in the OneFormer: One Transformer to Rule Universal Image Segmentation paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the RSSeg-ViT-L(BEiT pretrain) model in the Representation Separation for Semantic Segmentation with Vision Transformers paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the OneFormer (Swin-L, multi-scale, 896x896) model in the OneFormer: One Transformer to Rule Universal Image Segmentation paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the OneFormer (DiNAT-L, single-scale, 640x640) model in the OneFormer: One Transformer to Rule Universal Image Segmentation paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the SeMask (SeMask Swin-L FaPN-Mask2Former) model in the SeMask: Semantically Masked Transformers for Semantic Segmentation paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the SeMask (SeMask Swin-L MSFaPN-Mask2Former) model in the SeMask: Semantically Masked Transformers for Semantic Segmentation paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the DiNAT-L (Mask2Former) model in the Dilated Neighborhood Attention Transformer paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the Mask2Former (Swin-L-FaPN, multiscale) model in the Masked-attention Mask Transformer for Universal Image Segmentation paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the OneFormer (Swin-L, multi-scale, 640x640) model in the OneFormer: One Transformer to Rule Universal Image Segmentation paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the SeMask (SeMask Swin-L Mask2Former) model in the SeMask: Semantically Masked Transformers for Semantic Segmentation paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the SenFormer (BEiT-L) model in the Efficient Self-Ensemble for Semantic Segmentation paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the BEiT-L (ViT+UperNet, ImageNet-22k pretrain) model in the BEiT: BERT Pre-Training of Image Transformers paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the SeMask (SeMask Swin-L MSFaPN-Mask2Former, single-scale) model in the SeMask: Semantically Masked Transformers for Semantic Segmentation paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the FaPN (MaskFormer, Swin-L, ImageNet-22k pretrain) model in the FaPN: Feature-aligned Pyramid Network for Dense Image Prediction paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the Mask2Former (Swin-L-FaPN) model in the Masked-attention Mask Transformer for Universal Image Segmentation paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the SeMask (SeMask Swin-L MaskFormer) model in the SeMask: Semantically Masked Transformers for Semantic Segmentation paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the CSWin-L (UperNet, ImageNet-22k pretrain) model in the CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the MaskFormer (Swin-L, ImageNet-22k pretrain) model in the Per-Pixel Classification is Not All You Need for Semantic Segmentation paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the DeiT-L model in the DeiT III: Revenge of the ViT paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the Focal-L (UperNet, ImageNet-22k pretrain) model in the Focal Self-attention for Local-Global Interactions in Vision Transformers paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the SegViT ViT-Large model in the SegViT: Semantic Segmentation with Plain Vision Transformers paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |
What metrics were used to measure the PatchDiverse + Swin-L (multi-scale test, upernet, ImageNet22k pretrain) model in the Vision Transformers with Patch Diversification paper on the ADE20K val dataset? | mIoU, Pixel Accuracy |