| prompts | metrics_response |
|---|---|
What metrics were used to measure the RITnet model in the RITnet: Real-time Semantic Segmentation of the Eye for Gaze Tracking paper on the OpenEDS dataset? | mIOU |
What metrics were used to measure the ACLNet model in the ACLNet: An Attention and Clustering-based Cloud Segmentation Network paper on the SWINSEG dataset? | Average Precision, Average Recall, F1-Score, MCC, Mean IoU |
What metrics were used to measure the ScribbleVC model in the ScribbleVC: Scribble-supervised Medical Image Segmentation with Vision-Class Embedding paper on the ACDC Scribbles dataset? | Dice (Average) |
What metrics were used to measure the CycleMix model in the CycleMix: A Holistic Strategy for Medical Image Segmentation from Scribble Supervision paper on the ACDC Scribbles dataset? | Dice (Average) |
What metrics were used to measure the CutMix model in the CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features paper on the ACDC Scribbles dataset? | Dice (Average) |
What metrics were used to measure the TFCNs model in the TFCNs: A CNN-Transformer Hybrid Network for Medical Image Segmentation paper on the ACDC Scribbles dataset? | Dice (Average) |
What metrics were used to measure the Puzzle Mix model in the Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup paper on the ACDC Scribbles dataset? | Dice (Average) |
What metrics were used to measure the MIC model in the MIC: Masked Image Consistency for Context-Enhanced Domain Adaptation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the HRDA + PiPa model in the PiPa: Pixel- and Patch-wise Self-supervised Learning for Domain Adaptative Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the HRDA model in the HRDA: Context-Aware High-Resolution Domain-Adaptive Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the SePiCo model in the SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the DAFormer + ProCST model in the ProCST: Boosting Semantic Segmentation Using Progressive Cyclic Style-Transfer paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the DAFormer model in the DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the TransDA-B model in the Smoothing Matters: Momentum Transformer for Domain Adaptive Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the G2L model in the G2L: A Global to Local Alignment Method for Unsupervised Domain Adaptive Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the ProDA+CRA model in the Cross-Region Domain Adaptation for Class-level Alignment paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the ProDA model in the Prototypical Pseudo Label Denoising and Target Structure Learning for Domain Adaptive Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the CCM model in the Content-Consistent Matching for Domain Adaptive Semantic Segmentation paper on the GTAV-to-Cityscapes Labels dataset? | mIoU |
What metrics were used to measure the CMNeXt (RGB-D-E-LiDAR) model in the Delivering Arbitrary-Modal Semantic Segmentation paper on the KITTI-360 dataset? | mIoU |
What metrics were used to measure the CMNeXt (RGB-D-LiDAR) model in the Delivering Arbitrary-Modal Semantic Segmentation paper on the KITTI-360 dataset? | mIoU |
What metrics were used to measure the CMX (RGB-Depth) model in the CMX: Cross-Modal Fusion for RGB-X Semantic Segmentation with Transformers paper on the KITTI-360 dataset? | mIoU |
What metrics were used to measure the CMX (RGB-LiDAR) model in the CMX: Cross-Modal Fusion for RGB-X Semantic Segmentation with Transformers paper on the KITTI-360 dataset? | mIoU |
What metrics were used to measure the CMNeXt (RGB-Depth) model in the Delivering Arbitrary-Modal Semantic Segmentation paper on the KITTI-360 dataset? | mIoU |
What metrics were used to measure the ACNet (ResNet50) model in the ACNet: Attention Based Network to Exploit Complementary Features for RGBD Semantic Segmentation paper on the KITTI-360 dataset? | mIoU |
What metrics were used to measure the CMNeXt (RGB-LiDAR) model in the Delivering Arbitrary-Modal Semantic Segmentation paper on the KITTI-360 dataset? | mIoU |
What metrics were used to measure the TokenFusion (RGB-Depth) model in the Multimodal Token Fusion for Vision Transformers paper on the KITTI-360 dataset? | mIoU |
What metrics were used to measure the ISSAFE (ResNet50) model in the ISSAFE: Improving Semantic Segmentation in Accidents by Fusing Event-based Data paper on the KITTI-360 dataset? | mIoU |
What metrics were used to measure the TransFuser (RGB-LiDAR) model in the Multi-Modal Fusion Transformer for End-to-End Autonomous Driving paper on the KITTI-360 dataset? | mIoU |
What metrics were used to measure the TokenFusion (RGB-LiDAR) model in the Multimodal Token Fusion for Vision Transformers paper on the KITTI-360 dataset? | mIoU |
What metrics were used to measure the PMF (RGB-LiDAR) model in the Perception-Aware Multi-Sensor Fusion for 3D LiDAR Semantic Segmentation paper on the KITTI-360 dataset? | mIoU |
What metrics were used to measure the ISSAFE (ResNet18) model in the ISSAFE: Improving Semantic Segmentation in Accidents by Fusing Event-based Data paper on the KITTI-360 dataset? | mIoU |
What metrics were used to measure the HRFuser (RGB-D-LiDAR) model in the HRFuser: A Multi-resolution Sensor Fusion Architecture for 2D Object Detection paper on the KITTI-360 dataset? | mIoU |
What metrics were used to measure the HRFuser (RGB-Depth) model in the HRFuser: A Multi-resolution Sensor Fusion Architecture for 2D Object Detection paper on the KITTI-360 dataset? | mIoU |
What metrics were used to measure the HRFuser (RGB-LiDAR) model in the HRFuser: A Multi-resolution Sensor Fusion Architecture for 2D Object Detection paper on the KITTI-360 dataset? | mIoU |
What metrics were used to measure the PGSNet (RGB-D-LiDAR) model in the Glass Segmentation Using Intensity and Spectral Polarization Cues paper on the KITTI-360 dataset? | mIoU |
What metrics were used to measure the SFSS-MMSI (RGB+Depth+Normal) model in the Single Frame Semantic Segmentation Using Multi-Modal Spherical Images paper on the Structured3D dataset? | Test mIoU, Validation mIoU |
What metrics were used to measure the SFSS-MMSI (RGB+Normal) model in the Single Frame Semantic Segmentation Using Multi-Modal Spherical Images paper on the Structured3D dataset? | Test mIoU, Validation mIoU |
What metrics were used to measure the SFSS-MMSI (RGB+Depth) model in the Single Frame Semantic Segmentation Using Multi-Modal Spherical Images paper on the Structured3D dataset? | Test mIoU, Validation mIoU |
What metrics were used to measure the SFSS-MMSI (RGB Only) model in the Single Frame Semantic Segmentation Using Multi-Modal Spherical Images paper on the Structured3D dataset? | Test mIoU, Validation mIoU |
What metrics were used to measure the SMMCL (SegNeXt-B) model in the Understanding Dark Scenes by Contrasting Multi-Modal Observations paper on the LLRGBD-synthetic dataset? | mIoU |
What metrics were used to measure the SMMCL (SegFormer-B2) model in the Understanding Dark Scenes by Contrasting Multi-Modal Observations paper on the LLRGBD-synthetic dataset? | mIoU |
What metrics were used to measure the CMX (SegFormer-B2) model in the CMX: Cross-Modal Fusion for RGB-X Semantic Segmentation with Transformers paper on the LLRGBD-synthetic dataset? | mIoU |
What metrics were used to measure the TokenFusion (SegFormer-B2) model in the Multimodal Token Fusion for Vision Transformers paper on the LLRGBD-synthetic dataset? | mIoU |
What metrics were used to measure the SMMCL (ResNet-101) model in the Understanding Dark Scenes by Contrasting Multi-Modal Observations paper on the LLRGBD-synthetic dataset? | mIoU |
What metrics were used to measure the ShapeConv (ResNeXt-101) model in the ShapeConv: Shape-aware Convolutional Layer for Indoor RGB-D Semantic Segmentation paper on the LLRGBD-synthetic dataset? | mIoU |
What metrics were used to measure the CEN (ResNet-101) model in the Channel Exchanging Networks for Multimodal and Multitask Dense Image Prediction paper on the LLRGBD-synthetic dataset? | mIoU |
What metrics were used to measure the SA-Gate (ResNet-101) model in the Bi-directional Cross-Modality Feature Propagation with Separation-and-Aggregation Gate for RGB-D Semantic Segmentation paper on the LLRGBD-synthetic dataset? | mIoU |
What metrics were used to measure the MFSNet model in the MFSNet: A Multi Focus Segmentation Network for Skin Lesion Segmentation paper on the ISIC 2017 dataset? | Average Dice |
What metrics were used to measure the Erfani et al. model in the ATLANTIS: A Benchmark for Semantic Segmentation of Waterbody Images paper on the ATLANTIS dataset? | A-acc, A-mIoU, Accuracy, mIoU |
What metrics were used to measure the RPVNet [xu2021rpvnet] model in the Spherical Transformer for LiDAR-based 3D Recognition paper on the KITTI Semantic Segmentation dataset? | Mean IoU (class), Category IoU, Category iIoU, class iIoU |
What metrics were used to measure the DeepLabV3Plus + SDCNetAug model in the Improving Semantic Segmentation via Video Propagation and Label Relaxation paper on the KITTI Semantic Segmentation dataset? | Mean IoU (class), Category IoU, Category iIoU, class iIoU |
What metrics were used to measure the MapillaryAI model in the In-Place Activated BatchNorm for Memory-Optimized Training of DNNs paper on the KITTI Semantic Segmentation dataset? | Mean IoU (class), Category IoU, Category iIoU, class iIoU |
What metrics were used to measure the SIW model in the The devil is in the labels: Semantic segmentation from sentences paper on the KITTI Semantic Segmentation dataset? | Mean IoU (class), Category IoU, Category iIoU, class iIoU |
What metrics were used to measure the AHiSS model in the Training of Convolutional Networks on Multiple Heterogeneous Datasets for Street Scene Semantic Segmentation paper on the KITTI Semantic Segmentation dataset? | Mean IoU (class), Category IoU, Category iIoU, class iIoU |
What metrics were used to measure the SegStereo model in the SegStereo: Exploiting Semantic Information for Disparity Estimation paper on the KITTI Semantic Segmentation dataset? | Mean IoU (class), Category IoU, Category iIoU, class iIoU |
What metrics were used to measure the APMoE_seg model in the Pixel-wise Attentional Gating for Parsimonious Pixel Labeling paper on the KITTI Semantic Segmentation dataset? | Mean IoU (class), Category IoU, Category iIoU, class iIoU |
What metrics were used to measure the SCF-Net model in the SCF-Net: Learning Spatial Contextual Features for Large-Scale Point Cloud Segmentation paper on the Toronto-3D L002 dataset? | oAcc, mIoU |
What metrics were used to measure the PointNet++ model in the PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space paper on the Toronto-3D L002 dataset? | oAcc, mIoU |
What metrics were used to measure the RandLA-Net model in the RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds paper on the Toronto-3D L002 dataset? | oAcc, mIoU |
What metrics were used to measure the SFSS-MMSI (RGB+Depth) model in the Single Frame Semantic Segmentation Using Multi-Modal Spherical Images paper on the Matterport3D dataset? | Test mIoU, Validation mIoU |
What metrics were used to measure the SFSS-MMSI (RGB+Normal) model in the Single Frame Semantic Segmentation Using Multi-Modal Spherical Images paper on the Matterport3D dataset? | Test mIoU, Validation mIoU |
What metrics were used to measure the SFSS-MMSI (RGB+Depth+Normal) model in the Single Frame Semantic Segmentation Using Multi-Modal Spherical Images paper on the Matterport3D dataset? | Test mIoU, Validation mIoU |
What metrics were used to measure the SFSS-MMSI (RGB Only) model in the Single Frame Semantic Segmentation Using Multi-Modal Spherical Images paper on the Matterport3D dataset? | Test mIoU, Validation mIoU |
What metrics were used to measure the CMNeXt (RGB-LF80) model in the Delivering Arbitrary-Modal Semantic Segmentation paper on the UrbanLF dataset? | mIoU (Syn), mIoU (Real) |
What metrics were used to measure the CMNeXt (RGB-LF33) model in the Delivering Arbitrary-Modal Semantic Segmentation paper on the UrbanLF dataset? | mIoU (Syn), mIoU (Real) |
What metrics were used to measure the CMNeXt (RGB-LF8) model in the Delivering Arbitrary-Modal Semantic Segmentation paper on the UrbanLF dataset? | mIoU (Syn), mIoU (Real) |
What metrics were used to measure the SA-Gate model in the Bi-directional Cross-Modality Feature Propagation with Separation-and-Aggregation Gate for RGB-D Semantic Segmentation paper on the UrbanLF dataset? | mIoU (Syn), mIoU (Real) |
What metrics were used to measure the ESANet model in the Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis paper on the UrbanLF dataset? | mIoU (Syn), mIoU (Real) |
What metrics were used to measure the OCR (HRNetV2-W48) model in the U-Net: Convolutional Networks for Biomedical Image Segmentation paper on the UrbanLF dataset? | mIoU (Syn), mIoU (Real) |
What metrics were used to measure the MTINet (HRNetV2-W48) model in the MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning paper on the UrbanLF dataset? | mIoU (Syn), mIoU (Real) |
What metrics were used to measure the SegFormer model in the SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers paper on the UrbanLF dataset? | mIoU (Syn), mIoU (Real) |
What metrics were used to measure the SETR (ViT-Large) model in the Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers paper on the UrbanLF dataset? | mIoU (Syn), mIoU (Real) |
What metrics were used to measure the TMANet model in the Temporal Memory Attention for Video Semantic Segmentation paper on the UrbanLF dataset? | mIoU (Syn), mIoU (Real) |
What metrics were used to measure the PSPNet model in the Pyramid Scene Parsing Network paper on the UrbanLF dataset? | mIoU (Syn), mIoU (Real) |
What metrics were used to measure the DeepLabV3+ (ResNet-101) model in the Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation paper on the UrbanLF dataset? | mIoU (Syn), mIoU (Real) |
What metrics were used to measure the TDNet (ResNet-50) model in the Temporally Distributed Networks for Fast Video Semantic Segmentation paper on the UrbanLF dataset? | mIoU (Syn), mIoU (Real) |
What metrics were used to measure the DAVSS model in the Video Semantic Segmentation with Distortion-Aware Feature Correction paper on the UrbanLF dataset? | mIoU (Syn), mIoU (Real) |
What metrics were used to measure the PatchFormer model in the PatchFormer: An Efficient Point Transformer with Patch Attention paper on the ShapeNet dataset? | Mean IoU |
What metrics were used to measure the SGPN model in the SGPN: Similarity Group Proposal Network for 3D Point Cloud Instance Segmentation paper on the ShapeNet dataset? | Mean IoU |
What metrics were used to measure the JSNet model in the JSNet: Joint Instance and Semantic Segmentation of 3D Point Clouds paper on the ShapeNet dataset? | Mean IoU |
What metrics were used to measure the Point-PlaneNet model in the Point-PlaneNet: Plane kernel based convolutional neural network for point clouds analysis paper on the ShapeNet dataset? | Mean IoU |
What metrics were used to measure the PointNet++ model in the PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space paper on the ShapeNet dataset? | Mean IoU |
What metrics were used to measure the UniHCP (finetune) model in the UniHCP: A Unified Model for Human-Centric Perceptions paper on the LIP val dataset? | mIoU |
What metrics were used to measure the SOLIDER model in the Beyond Appearance: a Semantic Controllable Self-Supervised Learning Framework for Human-Centric Visual Tasks paper on the LIP val dataset? | mIoU |
What metrics were used to measure the HRNetV2 + OCR + RMI (PaddleClas pretrained) model in the Segmentation Transformer: Object-Contextual Representations for Semantic Segmentation paper on the LIP val dataset? | mIoU |
What metrics were used to measure the OCR (HRNetV2-W48) model in the Segmentation Transformer: Object-Contextual Representations for Semantic Segmentation paper on the LIP val dataset? | mIoU |
What metrics were used to measure the HRNetV2 (HRNetV2-W48) model in the High-Resolution Representations for Labeling Pixels and Regions paper on the LIP val dataset? | mIoU |
What metrics were used to measure the OCR (ResNet-101) model in the Segmentation Transformer: Object-Contextual Representations for Semantic Segmentation paper on the LIP val dataset? | mIoU |
What metrics were used to measure the CE2P (ResNet-101) model in the Devil in the Details: Towards Accurate Single and Multiple Human Parsing paper on the LIP val dataset? | mIoU |
What metrics were used to measure the JPPNet (ResNet-101) model in the Look into Person: Joint Body Parsing & Pose Estimation Network and A New Benchmark paper on the LIP val dataset? | mIoU |
What metrics were used to measure the MuLA (ResNet-101) model in the Mutual Learning to Adapt for Joint Human Parsing and Pose Estimation paper on the LIP val dataset? | mIoU |
What metrics were used to measure the MMAN (ResNet-101) model in the Macro-Micro Adversarial Network for Human Parsing paper on the LIP val dataset? | mIoU |
What metrics were used to measure the Attention+SSL (ResNet-101) model in the Look into Person: Self-supervised Structure-sensitive Learning and A New Benchmark for Human Parsing paper on the LIP val dataset? | mIoU |
What metrics were used to measure the InternImage-H model in the InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions paper on the Cityscapes val dataset? | mIoU, FPS |
What metrics were used to measure the HRNetV2-OCR+PSA model in the Polarized Self-Attention: Towards High-quality Pixel-wise Regression paper on the Cityscapes val dataset? | mIoU, FPS |
What metrics were used to measure the InternImage-XL model in the InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions paper on the Cityscapes val dataset? | mIoU, FPS |
What metrics were used to measure the HRNet-OCR model in the Hierarchical Multi-Scale Attention for Semantic Segmentation paper on the Cityscapes val dataset? | mIoU, FPS |
What metrics were used to measure the ViT-Adapter-L (Mask2Former, BEiT pretrain, Mapillary) model in the Vision Transformer Adapter for Dense Predictions paper on the Cityscapes val dataset? | mIoU, FPS |
What metrics were used to measure the OneFormer (ConvNeXt-XL, Mapillary, multi-scale) model in the OneFormer: One Transformer to Rule Universal Image Segmentation paper on the Cityscapes val dataset? | mIoU, FPS |
What metrics were used to measure the SeMask (SeMask Swin-L Mask2Former) model in the SeMask: Semantically Masked Transformers for Semantic Segmentation paper on the Cityscapes val dataset? | mIoU, FPS |