prompts | metrics_response |
|---|---|
What metrics were used to measure the Skylight at AI2, 4th place xView3 prize challenge model in the xView3-SAR: Detecting Dark Fishing Activity Using Synthetic Aperture Radar Imagery paper on the xView3-SAR dataset? | Aggregate xView3 Score |
What metrics were used to measure the Kohei, 5th place xView3 prize challenge model in the xView3-SAR: Detecting Dark Fishing Activity Using Synthetic Aperture Radar Imagery paper on the xView3-SAR dataset? | Aggregate xView3 Score |
What metrics were used to measure the SCI3D model in the Deep set conditioned latent representations for action recognition paper on the CATER dataset? | Average-mAP |
What metrics were used to measure the R3D-NL model in the Deep set conditioned latent representations for action recognition paper on the CATER dataset? | Average-mAP |
What metrics were used to measure the Single stream SCI3D model in the Deep set conditioned latent representations for action recognition paper on the CATER dataset? | Average-mAP |
What metrics were used to measure the FasterRCNN model in the Deep set conditioned latent representations for action recognition paper on the CATER dataset? | Average-mAP |
What metrics were used to measure the Waveformer model in the Real-Time Target Sound Extraction paper on the FSDSoundScapes dataset? | SI-SNRi |
What metrics were used to measure the STM (ImageNet+Kinetics pretrain) model in the STM: SpatioTemporal and Motion Encoding for Action Recognition paper on the UCF101 dataset? | 3-fold Accuracy |
What metrics were used to measure the 3D-SqueezeNet model in the Resource Efficient 3D Convolutional Neural Networks paper on the UCF101 dataset? | 3-fold Accuracy |
What metrics were used to measure the 3D-ShuffleNetV2 0.25x model in the Resource Efficient 3D Convolutional Neural Networks paper on the UCF101 dataset? | 3-fold Accuracy |
What metrics were used to measure the 3D-MobileNetV2 0.2x model in the Resource Efficient 3D Convolutional Neural Networks paper on the UCF101 dataset? | 3-fold Accuracy |
What metrics were used to measure the Baseline UCF101 model in the UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild paper on the UCF101 dataset? | 3-fold Accuracy |
What metrics were used to measure the 2D-3D-Softargmax (RGB only) model in the 2D/3D Pose Estimation and Action Recognition using Multitask Deep Learning paper on the NTU RGB+D dataset? | Accuracy (CS) |
What metrics were used to measure the STM (16 frames, ImageNet pretraining) model in the STM: SpatioTemporal and Motion Encoding for Action Recognition paper on the Something-Something V1 dataset? | Top 1 Accuracy |
What metrics were used to measure the Motion Feature Net model in the Motion Feature Network: Fixed Motion Filter for Action Recognition paper on the Something-Something V1 dataset? | Top 1 Accuracy |
What metrics were used to measure the 2-Stream TRN model in the Temporal Relational Reasoning in Videos paper on the Something-Something V1 dataset? | Top 1 Accuracy |
What metrics were used to measure the MMNet model in the MMNet: A Model-Based Multimodal Network for Human Action Recognition in RGB-D Videos paper on the PKU-MMD dataset? | X-Sub, X-View |
What metrics were used to measure the TSMF model in the Multimodal Fusion via Teacher-Student Network for Indoor Action Recognition paper on the PKU-MMD dataset? | X-Sub, X-View |
What metrics were used to measure the STM (ImageNet+Kinetics pretrain) model in the STM: SpatioTemporal and Motion Encoding for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the Florence model in the Florence: A New Foundation Model for Computer Vision paper on the Kinetics-400 dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the ActionCLIP (ViT-B/16) model in the ActionCLIP: A New Paradigm for Video Action Recognition paper on the Kinetics-400 dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the Frozen Backbone, SwinV2-G-ext22K (Video-Swin) model in the Could Giant Pretrained Image Models Extract Universal Representations? paper on the Kinetics-400 dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the LSTM + Pretrained on YT-8M model in the YouTube-8M: A Large-Scale Video Classification Benchmark paper on the ActivityNet dataset? | mAP |
What metrics were used to measure the ITANet model in the Learning Implicit Temporal Alignment for Few-shot Video Classification paper on the FS-Something-Something V2-Small dataset? | Top-1 Accuracy(5-Way-1-Shot), Top-1 Accuracy(5-Way-5-Shot) |
What metrics were used to measure the CMN[35] model in the Learning Implicit Temporal Alignment for Few-shot Video Classification paper on the FS-Something-Something V2-Small dataset? | Top-1 Accuracy(5-Way-1-Shot), Top-1 Accuracy(5-Way-5-Shot) |
What metrics were used to measure the ITANet model in the Learning Implicit Temporal Alignment for Few-shot Video Classification paper on the FS-Something-Something V2-Full dataset? | Top-1 Accuracy(5-Way-1-Shot), Top-1 Accuracy(5-Way-5-Shot) |
What metrics were used to measure the OTAM[3]++ model in the Learning Implicit Temporal Alignment for Few-shot Video Classification paper on the FS-Something-Something V2-Full dataset? | Top-1 Accuracy(5-Way-1-Shot), Top-1 Accuracy(5-Way-5-Shot) |
What metrics were used to measure the Florence model in the Florence: A New Foundation Model for Computer Vision paper on the Kinetics-600 dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the Single-stream R-C3D (two-way buffer) model in the R-C3D: Region Convolutional 3D Network for Temporal Activity Detection paper on the THUMOS’14 dataset? | mAP@0.1, mAP@0.2, mAP@0.3, mAP@0.4, mAP@0.5 |
What metrics were used to measure the Single-stream R-C3D (one-way buffer) model in the R-C3D: Region Convolutional 3D Network for Temporal Activity Detection paper on the THUMOS’14 dataset? | mAP@0.1, mAP@0.2, mAP@0.3, mAP@0.4, mAP@0.5 |
What metrics were used to measure the G-Blend model in the What Makes Training Multi-Modal Classification Networks Hard? paper on the miniSports dataset? | Clip Hit@1, Video hit@1, Video hit@5 |
What metrics were used to measure the STM (16 frames, ImageNet pretraining) model in the STM: SpatioTemporal and Motion Encoding for Action Recognition paper on the Something-Something V2 dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the CPNet Res34, 5 CP model in the Learning Video Representations from Correspondence Proposals paper on the Something-Something V2 dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the 2-Stream TRN model in the Temporal Relational Reasoning in Videos paper on the Something-Something V2 dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the DIN model in the DenseImage Network: Video Spatial-Temporal Evolution Encoding and Understanding paper on the Something-Something V2 dataset? | Top-1 Accuracy, Top-5 Accuracy |
What metrics were used to measure the YOWO+LFB* model in the You Only Watch Once: A Unified CNN Architecture for Real-Time Spatiotemporal Action Localization paper on the AVA v2.2 dataset? | mAP (Val) |
What metrics were used to measure the CPNet Res34, 5 CP model in the Learning Video Representations from Correspondence Proposals paper on the Jester (Gesture Recognition) dataset? | Val |
What metrics were used to measure the STM (Resnet-50, 16 frames) model in the STM: SpatioTemporal and Motion Encoding for Action Recognition paper on the Jester (Gesture Recognition) dataset? | Val |
What metrics were used to measure the MFNet model in the Motion Feature Network: Fixed Motion Filter for Action Recognition paper on the Jester (Gesture Recognition) dataset? | Val |
What metrics were used to measure the MultiScale TRN model in the Temporal Relational Reasoning in Videos paper on the Jester (Gesture Recognition) dataset? | Val |
What metrics were used to measure the DIN model in the DenseImage Network: Video Spatial-Temporal Evolution Encoding and Understanding paper on the Jester (Gesture Recognition) dataset? | Val |
What metrics were used to measure the convSTAR model in the Gating Revisited: Deep Multi-layer RNNs That Can Be Trained paper on the Jester (Gesture Recognition) dataset? | Val |
What metrics were used to measure the 3D-SqueezeNet model in the Resource Efficient 3D Convolutional Neural Networks paper on the Jester (Gesture Recognition) dataset? | Val |
What metrics were used to measure the 3D-ShuffleNetV2 0.25x model in the Resource Efficient 3D Convolutional Neural Networks paper on the Jester (Gesture Recognition) dataset? | Val |
What metrics were used to measure the 3D-MobileNetV2 0.2x model in the Resource Efficient 3D Convolutional Neural Networks paper on the Jester (Gesture Recognition) dataset? | Val |
What metrics were used to measure the G-Blend model in the What Makes Training Multi-Modal Classification Networks Hard? paper on the Sports-1M dataset? | Video hit@1, Video hit@5 |
What metrics were used to measure the LSTM +Pretrained on YT-8M model in the YouTube-8M: A Large-Scale Video Classification Benchmark paper on the Sports-1M dataset? | Video hit@1, Video hit@5 |
What metrics were used to measure the YOWO+LFB* model in the You Only Watch Once: A Unified CNN Architecture for Real-Time Spatiotemporal Action Localization paper on the AVA v2.1 dataset? | mAP (Val) |
What metrics were used to measure the LayoutMask (large) model in the LayoutMask: Enhance Text-Layout Interaction in Multi-modal Pre-training for Document Understanding paper on the FUNSD dataset? | F1 |
What metrics were used to measure the ERNIE-Layoutlarge model in the ERNIE-Layout: Layout Knowledge Enhanced Pre-training for Visually-rich Document Understanding paper on the FUNSD dataset? | F1 |
What metrics were used to measure the LayoutMask (base) model in the LayoutMask: Enhance Text-Layout Interaction in Multi-modal Pre-training for Document Understanding paper on the FUNSD dataset? | F1 |
What metrics were used to measure the GeoLayoutLM model in the GeoLayoutLM: Geometric Pre-training for Visual Information Extraction paper on the FUNSD dataset? | F1 |
What metrics were used to measure the LayoutLMv3 Large model in the LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking paper on the FUNSD dataset? | F1 |
What metrics were used to measure the StrucTexTv2 (large) model in the StrucTexTv2: Masked Visual-Textual Prediction for Document Image Pre-training paper on the FUNSD dataset? | F1 |
What metrics were used to measure the XDoc1M model in the XDoc: Unified Pre-training for Cross-Format Document Understanding paper on the FUNSD dataset? | F1 |
What metrics were used to measure the StrucTexTv2 (small) model in the StrucTexTv2: Masked Visual-Textual Prediction for Document Image Pre-training paper on the FUNSD dataset? | F1 |
What metrics were used to measure the LILT model in the LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding paper on the FUNSD dataset? | F1 |
What metrics were used to measure the LayoutLMv2LARGE model in the LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding paper on the FUNSD dataset? | F1 |
What metrics were used to measure the DocTr model in the DocTr: Document Transformer for Structured Information Extraction in Documents paper on the FUNSD dataset? | F1 |
What metrics were used to measure the LayoutLMv2BASE model in the LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding paper on the FUNSD dataset? | F1 |
What metrics were used to measure the Doc2Graph model in the Doc2Graph: a Task Agnostic Document Understanding Framework based on Graph Neural Networks paper on the FUNSD dataset? | F1 |
What metrics were used to measure the InternVideo model in the InternVideo-Ego4D: A Pack of Champion Solutions to Ego4D Challenges paper on the Ego4D dataset? | AP, AP50, AP75 |
What metrics were used to measure the GANOv2 model in the Guided Attention for Next Active Object @ EGO4D STA Challenge paper on the Ego4D dataset? | Overall (Top5 mAP), Noun (Top5 mAP), Noun+Verb(Top5 mAP), Noun+TTC (Top5 mAP) |
What metrics were used to measure the InternVideo model in the InternVideo-Ego4D: A Pack of Champion Solutions to Ego4D Challenges paper on the Ego4D dataset? | Overall (Top5 mAP), Noun (Top5 mAP), Noun+Verb(Top5 mAP), Noun+TTC (Top5 mAP) |
What metrics were used to measure the InternVideo model in the InternVideo-Ego4D: A Pack of Champion Solutions to Ego4D Challenges paper on the Ego4D dataset? | Disp(Total), M.Disp(Left), C.Disp(Left), M.Disp(Right), C.Disp(Right) |
What metrics were used to measure the MVP model in the MTTN: Multi-Pair Text to Text Narratives for Prompt Generation paper on the MTTN: Multi-Pair Text to Text Narratives for Prompt Generation dataset? | ROUGE-1 |
What metrics were used to measure the BART model in the MTTN: Multi-Pair Text to Text Narratives for Prompt Generation paper on the MTTN: Multi-Pair Text to Text Narratives for Prompt Generation dataset? | ROUGE-1 |
What metrics were used to measure the T5 model in the MTTN: Multi-Pair Text to Text Narratives for Prompt Generation paper on the MTTN: Multi-Pair Text to Text Narratives for Prompt Generation dataset? | ROUGE-1 |
What metrics were used to measure the RAMS (ours) model in the Multi-image Super Resolution of Remotely Sensed Images using Residual Feature Attention Deep Neural Networks paper on the Ultra Video Group HD - 4x upscaling dataset? | Average PSNR |
What metrics were used to measure the DeepSUM[41] model in the Multi-image Super Resolution of Remotely Sensed Images using Residual Feature Attention Deep Neural Networks paper on the Ultra Video Group HD - 4x upscaling dataset? | Average PSNR |
What metrics were used to measure the ESPCN model in the Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network paper on the Ultra Video Group HD - 4x upscaling dataset? | Average PSNR |
What metrics were used to measure the SRCNN model in the Image Super-Resolution Using Deep Convolutional Networks paper on the Ultra Video Group HD - 4x upscaling dataset? | Average PSNR |
What metrics were used to measure the bicubic model in the Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network paper on the Ultra Video Group HD - 4x upscaling dataset? | Average PSNR |
What metrics were used to measure the VRT model in the VRT: A Video Restoration Transformer paper on the UDM10 - 4x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the RVRT model in the Recurrent Video Restoration Transformer with Guided Deformable Attention paper on the UDM10 - 4x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the BasicVSR++ model in the BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment paper on the UDM10 - 4x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the TTVSR model in the Learning Trajectory-Aware Transformer for Video Super-Resolution paper on the UDM10 - 4x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the IconVSR model in the BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond paper on the UDM10 - 4x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the BasicVSR model in the BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond paper on the UDM10 - 4x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the RRN-L model in the Revisiting Temporal Modeling for Video Super-resolution paper on the UDM10 - 4x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the DeFMO model in the DeFMO: Deblurring and Shape Recovery of Fast Moving Objects paper on the Falling Objects dataset? | SSIM, PSNR, TIoU |
What metrics were used to measure the TbD-3D model in the Sub-frame Appearance and 6D Pose Estimation of Fast Moving Objects paper on the Falling Objects dataset? | SSIM, PSNR, TIoU |
What metrics were used to measure the TbD model in the Intra-frame Object Tracking by Deblatting paper on the Falling Objects dataset? | SSIM, PSNR, TIoU |
What metrics were used to measure the DeFMO model in the DeFMO: Deblurring and Shape Recovery of Fast Moving Objects paper on the TbD-3D dataset? | SSIM, PSNR, TIoU |
What metrics were used to measure the TbD-3D model in the Sub-frame Appearance and 6D Pose Estimation of Fast Moving Objects paper on the TbD-3D dataset? | SSIM, PSNR, TIoU |
What metrics were used to measure the TbD model in the Intra-frame Object Tracking by Deblatting paper on the TbD-3D dataset? | SSIM, PSNR, TIoU |
What metrics were used to measure the RTA-Vimeo-90K model in the Revisiting Temporal Alignment for Video Restoration paper on the Vimeo-90K dataset? | Average PSNR |
What metrics were used to measure the ESPCN model in the Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network paper on the Xiph HD - 4x upscaling dataset? | Average PSNR |
What metrics were used to measure the SRCNN model in the Image Super-Resolution Using Deep Convolutional Networks paper on the Xiph HD - 4x upscaling dataset? | Average PSNR |
What metrics were used to measure the bicubic model in the Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network paper on the Xiph HD - 4x upscaling dataset? | Average PSNR |
What metrics were used to measure the VRT model in the VRT: A Video Restoration Transformer paper on the MSU Video Super Resolution Benchmark: Detail Restoration dataset? | Subjective score, ERQAv1.0, 1 - LPIPS, SSIM, QRCRv1.0, PSNR, FPS |
What metrics were used to measure the BasicVSR model in the BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond paper on the MSU Video Super Resolution Benchmark: Detail Restoration dataset? | Subjective score, ERQAv1.0, 1 - LPIPS, SSIM, QRCRv1.0, PSNR, FPS |
What metrics were used to measure the RBPN model in the Recurrent Back-Projection Network for Video Super-Resolution paper on the MSU Video Super Resolution Benchmark: Detail Restoration dataset? | Subjective score, ERQAv1.0, 1 - LPIPS, SSIM, QRCRv1.0, PSNR, FPS |
What metrics were used to measure the DBVSR model in the Deep Blind Video Super-resolution paper on the MSU Video Super Resolution Benchmark: Detail Restoration dataset? | Subjective score, ERQAv1.0, 1 - LPIPS, SSIM, QRCRv1.0, PSNR, FPS |
What metrics were used to measure the iSeeBetter model in the iSeeBetter: Spatio-Temporal Video Super Resolution using Recurrent-Generative Back-Projection Networks paper on the MSU Video Super Resolution Benchmark: Detail Restoration dataset? | Subjective score, ERQAv1.0, 1 - LPIPS, SSIM, QRCRv1.0, PSNR, FPS |
What metrics were used to measure the LGFN model in the Local-Global Fusion Network for Video Super-Resolution paper on the MSU Video Super Resolution Benchmark: Detail Restoration dataset? | Subjective score, ERQAv1.0, 1 - LPIPS, SSIM, QRCRv1.0, PSNR, FPS |
What metrics were used to measure the DynaVSR-R model in the DynaVSR: Dynamic Adaptive Blind Video Super-Resolution paper on the MSU Video Super Resolution Benchmark: Detail Restoration dataset? | Subjective score, ERQAv1.0, 1 - LPIPS, SSIM, QRCRv1.0, PSNR, FPS |
What metrics were used to measure the TMNet model in the Temporal Modulation Network for Controllable Space-Time Video Super-Resolution paper on the MSU Video Super Resolution Benchmark: Detail Restoration dataset? | Subjective score, ERQAv1.0, 1 - LPIPS, SSIM, QRCRv1.0, PSNR, FPS |
What metrics were used to measure the COMISR model in the COMISR: Compression-Informed Video Super-Resolution paper on the MSU Video Super Resolution Benchmark: Detail Restoration dataset? | Subjective score, ERQAv1.0, 1 - LPIPS, SSIM, QRCRv1.0, PSNR, FPS |
What metrics were used to measure the RSDN model in the Video Super-Resolution with Recurrent Structure-Detail Network paper on the MSU Video Super Resolution Benchmark: Detail Restoration dataset? | Subjective score, ERQAv1.0, 1 - LPIPS, SSIM, QRCRv1.0, PSNR, FPS |