| prompts | metrics_response |
|---|---|
What metrics were used to measure the VAN-Small model in the Visual Attention Network paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the PoolFormer-M48 model in the MetaFormer Is Actually What You Need for Vision paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the UperNet (ResNet-101) model in the Unified Perceptual Parsing for Scene Understanding paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the tiny-MOAT-0 (IN-1K pretraining, single scale) model in the MOAT: Alternating Mobile Convolution and Attention Brings Strong Vision Models paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the RefineNet model in the RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the FBNetV5 model in the FBNetV5: Neural Architecture Search for Multiple Tasks in One Run paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the ConvMLP-L model in the ConvMLP: Hierarchical Convolutional MLPs for Vision paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the ConvMLP-M model in the ConvMLP: Hierarchical Convolutional MLPs for Vision paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the VAN-Tiny model in the Visual Attention Network paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the A2MIM (ResNet-50) model in the Architecture-Agnostic Masked Image Modeling -- From ViT back to CNN paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the iBOT (ViT-B/16) (linear head) model in the iBOT: Image BERT Pre-Training with Online Tokenizer paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the SegFormer-B0 model in the SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the MUXNet-m + PPM model in the MUXConv: Information Multiplexing in Convolutional Neural Networks paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the ConvMLP-S model in the ConvMLP: Hierarchical Convolutional MLPs for Vision paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the MUXNet-m + C1 model in the MUXConv: Information Multiplexing in Convolutional Neural Networks paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the DilatedNet model in the Multi-Scale Context Aggregation by Dilated Convolutions paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the FCN model in the Fully Convolutional Networks for Semantic Segmentation paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the SegNet model in the SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the InternImage-H (M3I Pre-training) model in the InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions paper on the ADE20K dataset? | Validation mIoU, Test Score, Params (M), GFLOPs (512 x 512), GFLOPs |
What metrics were used to measure the TEC (ViT-B/16, 224x224, SSL+FT, mmseg) model in the Towards Sustainable Self-supervised Learning paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the SERE (ViT-B/16, 100ep, 224x224, SSL+FT) model in the SERE: Exploring Feature Self-relation for Self-supervised Transformer paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the TEC (ViT-B/16, 224x224, SSL+FT) model in the Towards Sustainable Self-supervised Learning paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the MAE (ViT-B/16, 224x224, SSL+FT, mmseg) model in the Masked Autoencoders Are Scalable Vision Learners paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the MAE (ViT-B/16, 224x224, SSL+FT) model in the Masked Autoencoders Are Scalable Vision Learners paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the SERE (ViT-S/16, 100ep, 224x224, SSL+FT, mmseg) model in the SERE: Exploring Feature Self-relation for Self-supervised Transformer paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the SERE (ViT-S/16, 100ep, 224x224, SSL+FT) model in the SERE: Exploring Feature Self-relation for Self-supervised Transformer paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the RF-ConvNext-Tiny (rfmerge, P4, 224x224, SUP) model in the RF-Next: Efficient Receptive Field Search for Convolutional Neural Networks paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the RF-ConvNext-Tiny (rfmultiple, P4, 224x224, SUP) model in the RF-Next: Efficient Receptive Field Search for Convolutional Neural Networks paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the RF-ConvNext-Tiny (rfsingle, P4, 224x224, SUP) model in the RF-Next: Efficient Receptive Field Search for Convolutional Neural Networks paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the ConvNext-Tiny (P4, 224x224, SUP) model in the A ConvNet for the 2020s paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the SERE (ViT-B/16, 100ep, 224x224, SSL) model in the SERE: Exploring Feature Self-relation for Self-supervised Transformer paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the TEC (ViT-B/16, 224x224, SSL, mmseg) model in the Towards Sustainable Self-supervised Learning paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the TEC (ViT-B/16, 224x224, SSL) model in the Towards Sustainable Self-supervised Learning paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the SERE (ViT-S/16, 100ep, 224x224, SSL, mmseg) model in the SERE: Exploring Feature Self-relation for Self-supervised Transformer paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the SERE (ViT-S/16, 100ep, 224x224, SSL) model in the SERE: Exploring Feature Self-relation for Self-supervised Transformer paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the MAE (ViT-B/16, 224x224, SSL, mmseg) model in the Masked Autoencoders Are Scalable Vision Learners paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the MAE (ViT-B/16, 224x224, SSL) model in the Masked Autoencoders Are Scalable Vision Learners paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the PASS (ResNet-50 D16, 224x224, LUSS) model in the Large-scale Unsupervised Semantic Segmentation paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the PASS (ResNet-50 D32, 224x224, LUSS) model in the Large-scale Unsupervised Semantic Segmentation paper on the ImageNet-S dataset? | mIoU (val), mIoU (test) |
What metrics were used to measure the FPN EfficientNet-B4 model in the dacl10k: Benchmark for Semantic Bridge Damage Segmentation paper on the dacl10k v1 testfinal dataset? | mIoU |
What metrics were used to measure the FoodSAM model in the FoodSAM: Any Food Segmentation paper on the FoodSeg103 dataset? | mIoU |
What metrics were used to measure the SeTR-MLA (ViT-16/B) model in the Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers paper on the FoodSeg103 dataset? | mIoU |
What metrics were used to measure the SeTR-Naive (ReLeM-ViT-16/B) model in the A Large-Scale Benchmark for Food Image Segmentation paper on the FoodSeg103 dataset? | mIoU |
What metrics were used to measure the Swin-Transformer (Swin-Small) model in the Swin Transformer: Hierarchical Vision Transformer using Shifted Windows paper on the FoodSeg103 dataset? | mIoU |
What metrics were used to measure the SeTR-Naive (ViT-16/B) model in the Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers paper on the FoodSeg103 dataset? | mIoU |
What metrics were used to measure the CCNet (ReLeM-ResNet-50) model in the A Large-Scale Benchmark for Food Image Segmentation paper on the FoodSeg103 dataset? | mIoU |
What metrics were used to measure the CCNet (ResNet-50) model in the CCNet: Criss-Cross Attention for Semantic Segmentation paper on the FoodSeg103 dataset? | mIoU |
What metrics were used to measure the Frustum PointNets model in the Frustum PointNets for 3D Object Detection from RGB-D Data paper on the KITTI Cyclists Easy dataset? | AP |
What metrics were used to measure the VoxelNet model in the VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection paper on the KITTI Cyclists Easy dataset? | AP |
What metrics were used to measure the Frustum PointNets model in the Frustum PointNets for 3D Object Detection from RGB-D Data paper on the KITTI Cyclists Moderate dataset? | AP |
What metrics were used to measure the VoxelNet model in the VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection paper on the KITTI Cyclists Moderate dataset? | AP |
What metrics were used to measure the Frustum-PointPillars model in the Frustum-PointPillars: A Multi-Stage Approach for 3D Object Detection using RGB Camera and LiDAR paper on the KITTI Pedestrians Easy dataset? | AP
What metrics were used to measure the Hausdorff Loss model in the Locating Objects Without Bounding Boxes paper on the Plant dataset? | F-Score |
What metrics were used to measure the Frustum PointNets model in the Frustum PointNets for 3D Object Detection from RGB-D Data paper on the KITTI Cars Moderate dataset? | AP |
What metrics were used to measure the VoxelNet model in the VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection paper on the KITTI Cars Moderate dataset? | AP |
What metrics were used to measure the Hausdorff Loss model in the Locating Objects Without Bounding Boxes paper on the Mall dataset? | Precision |
What metrics were used to measure the Hausdorff Loss model in the Locating Objects Without Bounding Boxes paper on the Pupil dataset? | Recall |
What metrics were used to measure the Frustum-PointPillars model in the Frustum-PointPillars: A Multi-Stage Approach for 3D Object Detection using RGB Camera and LiDAR paper on the KITTI Pedestrians Hard dataset? | AP
What metrics were used to measure the Frustum PointNets model in the Frustum PointNets for 3D Object Detection from RGB-D Data paper on the KITTI Pedestrians Hard dataset? | AP |
What metrics were used to measure the VoxelNet model in the VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection paper on the KITTI Pedestrians Hard dataset? | AP |
What metrics were used to measure the OSMaN model in the paper on the REVERIE dataset? | RGSPL, Nav-Succ, Nav-OSucc, Nav-SPL, Nav-Length, RGS |
What metrics were used to measure the Shanks model in the paper on the REVERIE dataset? | RGSPL, Nav-Succ, Nav-OSucc, Nav-SPL, Nav-Length, RGS |
What metrics were used to measure the CVPR22 model in the paper on the REVERIE dataset? | RGSPL, Nav-Succ, Nav-OSucc, Nav-SPL, Nav-Length, RGS |
What metrics were used to measure the damm1 model in the paper on the REVERIE dataset? | RGSPL, Nav-Succ, Nav-OSucc, Nav-SPL, Nav-Length, RGS |
What metrics were used to measure the 1637 model in the paper on the REVERIE dataset? | RGSPL, Nav-Succ, Nav-OSucc, Nav-SPL, Nav-Length, RGS |
What metrics were used to measure the init. PREVALENT model in the paper on the REVERIE dataset? | RGSPL, Nav-Succ, Nav-OSucc, Nav-SPL, Nav-Length, RGS |
What metrics were used to measure the Airbert model in the paper on the REVERIE dataset? | RGSPL, Nav-Succ, Nav-OSucc, Nav-SPL, Nav-Length, RGS |
What metrics were used to measure the init. OSCAR model in the paper on the REVERIE dataset? | RGSPL, Nav-Succ, Nav-OSucc, Nav-SPL, Nav-Length, RGS |
What metrics were used to measure the SIA model in the paper on the REVERIE dataset? | RGSPL, Nav-Succ, Nav-OSucc, Nav-SPL, Nav-Length, RGS |
What metrics were used to measure the no init. OSCAR model in the paper on the REVERIE dataset? | RGSPL, Nav-Succ, Nav-OSucc, Nav-SPL, Nav-Length, RGS |
What metrics were used to measure the eaq model in the paper on the REVERIE dataset? | RGSPL, Nav-Succ, Nav-OSucc, Nav-SPL, Nav-Length, RGS |
What metrics were used to measure the REVERIE_Baseline model in the paper on the REVERIE dataset? | RGSPL, Nav-Succ, Nav-OSucc, Nav-SPL, Nav-Length, RGS |
What metrics were used to measure the VoxelNet model in the VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection paper on the KITTI Cars Easy dataset? | AP |
What metrics were used to measure the Frustum PointNets model in the Frustum PointNets for 3D Object Detection from RGB-D Data paper on the KITTI Cars Easy dataset? | AP |
What metrics were used to measure the VoxelNet model in the VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection paper on the KITTI Cars Hard dataset? | AP |
What metrics were used to measure the Frustum PointNets model in the Frustum PointNets for 3D Object Detection from RGB-D Data paper on the KITTI Cars Hard dataset? | AP |
What metrics were used to measure the Frustum PointNets model in the Frustum PointNets for 3D Object Detection from RGB-D Data paper on the KITTI Cyclists Hard dataset? | AP |
What metrics were used to measure the VoxelNet model in the VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection paper on the KITTI Cyclists Hard dataset? | AP
What metrics were used to measure the ours model in the Co-localization with Category-Consistent Features and Geodesic Distance Propagation paper on the PASCAL VOC 2012 dataset? | CorLoc |
What metrics were used to measure the Frustum PointNets model in the Frustum PointNets for 3D Object Detection from RGB-D Data paper on the KITTI Pedestrians Easy dataset? | AP |
What metrics were used to measure the VoxelNet model in the VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection paper on the KITTI Pedestrians Easy dataset? | AP |
What metrics were used to measure the Unified-IO XL model in the Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks paper on the GRIT dataset? | Localization (ablation), Localization (test)
What metrics were used to measure the GPV-2 model in the Webly Supervised Concept Expansion for General Purpose Vision Models paper on the GRIT dataset? | Localization (ablation), Localization (test) |
What metrics were used to measure the Mask R-CNN model in the Mask R-CNN paper on the GRIT dataset? | Localization (ablation), Localization (test) |
What metrics were used to measure the ours model in the Co-localization with Category-Consistent Features and Geodesic Distance Propagation paper on the PASCAL VOC 2007 dataset? | CorLoc |
What metrics were used to measure the Frustum-PointPillars model in the Frustum-PointPillars: A Multi-Stage Approach for 3D Object Detection using RGB Camera and LiDAR paper on the KITTI Pedestrians Moderate dataset? | AP
What metrics were used to measure the Frustum PointNets model in the Frustum PointNets for 3D Object Detection from RGB-D Data paper on the KITTI Pedestrians Moderate dataset? | AP |
What metrics were used to measure the VoxelNet model in the VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection paper on the KITTI Pedestrians Moderate dataset? | AP |
What metrics were used to measure the w2v2-L-robust-12 model in the Dawn of the transformer era in speech emotion recognition: closing the valence gap paper on the MSP-Podcast dataset? | Concordance correlation coefficient (CCC) |
What metrics were used to measure the 4D-aNN model in the 4D Attention-based Neural Network for EEG Emotion Recognition paper on the SEED dataset? | Accuracy |
What metrics were used to measure the Jukebox (Pre-training: CALM) model in the Codified audio language modeling learns useful representations for music information retrieval paper on the Emomusic dataset? | EmoA, EmoV |
What metrics were used to measure the CLMR (Pre-training: contrastive) model in the Codified audio language modeling learns useful representations for music information retrieval paper on the Emomusic dataset? | EmoA, EmoV |
What metrics were used to measure the BiHDM model in the A Novel Bi-hemispheric Discrepancy Model for EEG Emotion Recognition paper on the MPED dataset? | Accuracy |
What metrics were used to measure the LogisticRegression on posteriors of xlsr-Wav2Vec2.0&bi-LSTM+Attention model in the A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on RAVDESS dataset paper on the RAVDESS dataset? | Accuracy |
What metrics were used to measure the Intermediate-Attention-Fusion model in the Self-attention fusion for audiovisual emotion recognition with incomplete data paper on the RAVDESS dataset? | Accuracy |
What metrics were used to measure the Logistic Regression on posteriors of the CNN-14&biLSTM-GuidedST model in the Multimodal Emotion Recognition on RAVDESS Dataset Using Transfer Learning paper on the RAVDESS dataset? | Accuracy |
What metrics were used to measure the ERANN-0-4 model in the ERANNs: Efficient Residual Audio Neural Networks for Audio Pattern Recognition paper on the RAVDESS dataset? | Accuracy |
What metrics were used to measure the HypLiLoc model in the HypLiLoc: Towards Effective LiDAR Pose Regression with Hyperbolic Fusion paper on the Oxford Radar RobotCar (Full-6) dataset? | Mean Translation Error |
What metrics were used to measure the PoseSOE model in the LiDAR-based localization using universal encoding and memory-aware regression paper on the Oxford Radar RobotCar (Full-6) dataset? | Mean Translation Error |
What metrics were used to measure the PosePN++ model in the LiDAR-based localization using universal encoding and memory-aware regression paper on the Oxford Radar RobotCar (Full-6) dataset? | Mean Translation Error |
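Each row above pairs a natural-language prompt with a comma-separated list of metrics, joined by a single ` | ` delimiter. A minimal parsing sketch, assuming that row layout (the helper name `parse_row` is hypothetical, not part of the dataset):

```python
def parse_row(line: str) -> tuple[str, list[str]]:
    """Split one table row into (prompt, metric list).

    Uses rpartition so a stray pipe inside the prompt text would
    not break the split; metrics are trimmed of whitespace.
    """
    prompt, _, metrics = line.rpartition(" | ")
    return prompt, [m.strip() for m in metrics.split(",")]


row = (
    "What metrics were used to measure the VoxelNet model in the "
    "VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection "
    "paper on the KITTI Cars Easy dataset? | AP"
)
prompt, metrics = parse_row(row)
# metrics == ["AP"]
```

Rows with multi-metric responses (e.g. the ADE20K entries) simply yield longer metric lists from the same split.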