| prompts | metrics_response |
|---|---|
What metrics were used to measure the HBPN model in the Hierarchical Back Projection Network for Image Super-Resolution paper on the Urban100 - 2x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the DBPN-RES-MR64-3 model in the Deep Back-Projection Networks for Single Image Super-resolution paper on the Urban100 - 2x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the FACD model in the Feature-domain Adaptive Contrastive Distillation for Efficient Single Image Super-Resolution paper on the Urban100 - 2x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the SwinOIR model in the Resolution Enhancement Processing on Low Quality Images Using Swin Transformer Based on Interval Dense Connection Strategy paper on the Urban100 - 2x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the PMRN+ model in the Sequential Hierarchical Learning with Distribution Transformation for Image Super-Resolution paper on the Urban100 - 2x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the SRFBN model in the Feedback Network for Image Super-Resolution paper on the Urban100 - 2x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the MWCNN model in the Multi-level Wavelet-CNN for Image Restoration paper on the Urban100 - 2x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the IMDN model in the Lightweight Image Super-Resolution with Information Multi-distillation Network paper on the Urban100 - 2x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the SPBP-L+ model in the Sub-Pixel Back-Projection Network For Lightweight Single Image Super-Resolution paper on the Urban100 - 2x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the FALSR-A model in the Fast, Accurate and Lightweight Super-Resolution with Neural Architecture Search paper on the Urban100 - 2x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the VDSR [[Kim et al.2016a]] model in the Accurate Image Super-Resolution Using Very Deep Convolutional Networks paper on the Urban100 - 2x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the DRCN [[Kim et al.2016b]] model in the Deeply-Recursive Convolutional Network for Image Super-Resolution paper on the Urban100 - 2x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the DnCNN-3 model in the Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising paper on the Urban100 - 2x upscaling dataset? | PSNR, SSIM |
What metrics were used to measure the SATNet model in the SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver paper on the Sudoku 9x9 dataset? | Accuracy |
What metrics were used to measure the F-SWA model in the Flat Seeking Bayesian Neural Networks paper on the cifar100 dataset? | Accuracy, Expected Calibration Error |
What metrics were used to measure the F-SWAG model in the Flat Seeking Bayesian Neural Networks paper on the cifar100 dataset? | Accuracy, Expected Calibration Error |
What metrics were used to measure the Baseline model in the Extensible Hierarchical Method of Detecting Interactive Actions for Video Understanding paper on the ActionNet-VE dataset? | F-measure (%) |
What metrics were used to measure the UniFormerV2-L model in the UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer paper on the HACS dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SRTG r(2+1)d-101 model in the Learn to cycle: Time-consistent feature discovery for action recognition paper on the HACS dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SRTG r(2+1)d-50 model in the Learn to cycle: Time-consistent feature discovery for action recognition paper on the HACS dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SRTG r3d-101 model in the Learn to cycle: Time-consistent feature discovery for action recognition paper on the HACS dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SRTG r(2+1)d-34 model in the Learn to cycle: Time-consistent feature discovery for action recognition paper on the HACS dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SRTG r3d-50 model in the Learn to cycle: Time-consistent feature discovery for action recognition paper on the HACS dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the SRTG r3d-34 model in the Learn to cycle: Time-consistent feature discovery for action recognition paper on the HACS dataset? | Top 1 Accuracy, Top 5 Accuracy |
What metrics were used to measure the AIM (CLIP ViT-L/14, 32x224) model in the AIM: Adapting Image Models for Efficient Video Action Recognition paper on the Diving-48 dataset? | Accuracy |
What metrics were used to measure the TFCNet model in the TFCNet: Temporal Fully Connected Networks for Static Unbiased Temporal Reasoning paper on the Diving-48 dataset? | Accuracy |
What metrics were used to measure the ORViT TimeSformer model in the Object-Region Video Transformers paper on the Diving-48 dataset? | Accuracy |
What metrics were used to measure the GC-TDN model in the Group Contextualization for Video Recognition paper on the Diving-48 dataset? | Accuracy |
What metrics were used to measure the BEVT model in the BEVT: BERT Pretraining of Video Transformers paper on the Diving-48 dataset? | Accuracy |
What metrics were used to measure the VIMPAC model in the VIMPAC: Video Pre-Training via Masked Token Prediction and Contrastive Learning paper on the Diving-48 dataset? | Accuracy |
What metrics were used to measure the RSANet-R50 (16 frames, ImageNet pretrained, a single clip) model in the Relational Self-Attention: What's Missing in Attention for Video Understanding paper on the Diving-48 dataset? | Accuracy |
What metrics were used to measure the PMI Sampler model in the PMI Sampler: Patch Similarity Guided Frame Selection for Aerial Action Recognition paper on the Diving-48 dataset? | Accuracy |
What metrics were used to measure the TimeSformer-L model in the Is Space-Time Attention All You Need for Video Understanding? paper on the Diving-48 dataset? | Accuracy |
What metrics were used to measure the TimeSformer-HR model in the Is Space-Time Attention All You Need for Video Understanding? paper on the Diving-48 dataset? | Accuracy |
What metrics were used to measure the SlowFast model in the SlowFast Networks for Video Recognition paper on the Diving-48 dataset? | Accuracy |
What metrics were used to measure the TimeSformer model in the Is Space-Time Attention All You Need for Video Understanding? paper on the Diving-48 dataset? | Accuracy |
What metrics were used to measure the 🦩 Flamingo model in the Flamingo: a Visual Language Model for Few-Shot Learning paper on the RareAct dataset? | mWAP |
What metrics were used to measure the CLIP model in the Learning Transferable Visual Models From Natural Language Supervision paper on the RareAct dataset? | mWAP |
What metrics were used to measure the HT100M S3D model in the End-to-End Learning of Visual Representations from Uncurated Instructional Videos paper on the RareAct dataset? | mWAP |
What metrics were used to measure the STAR/L model in the End-to-End Spatio-Temporal Action Localisation with Video Transformers paper on the AVA v2.1 dataset? | mAP (Val), GFlops, Params (M) |
What metrics were used to measure the ACAR-Net, SlowFast R-101 (Kinetics-400 pretraining) model in the Actor-Context-Actor Relation Network for Spatio-Temporal Action Localization paper on the AVA v2.1 dataset? | mAP (Val), GFlops, Params (M) |
What metrics were used to measure the JMRN + SlowFast-R101-NL model in the Pose And Joint-Aware Action Recognition paper on the AVA v2.1 dataset? | mAP (Val), GFlops, Params (M) |
What metrics were used to measure the SlowFast++ (Kinetics-600 pretraining, NL) model in the SlowFast Networks for Video Recognition paper on the AVA v2.1 dataset? | mAP (Val), GFlops, Params (M) |
What metrics were used to measure the LFB (Kinetics-400 pretraining) model in the Long-Term Feature Banks for Detailed Video Understanding paper on the AVA v2.1 dataset? | mAP (Val), GFlops, Params (M) |
What metrics were used to measure the I3D Tx HighRes model in the Video Action Transformer Network paper on the AVA v2.1 dataset? | mAP (Val), GFlops, Params (M) |
What metrics were used to measure the SlowFast (Kinetics-600 pretraining, NL) model in the SlowFast Networks for Video Recognition paper on the AVA v2.1 dataset? | mAP (Val), GFlops, Params (M) |
What metrics were used to measure the SlowFast (Kinetics-600 pretraining) model in the SlowFast Networks for Video Recognition paper on the AVA v2.1 dataset? | mAP (Val), GFlops, Params (M) |
What metrics were used to measure the SlowFast (Kinetics-400 pretraining) model in the SlowFast Networks for Video Recognition paper on the AVA v2.1 dataset? | mAP (Val), GFlops, Params (M) |
What metrics were used to measure the I3D I3D model in the Video Action Transformer Network paper on the AVA v2.1 dataset? | mAP (Val), GFlops, Params (M) |
What metrics were used to measure the D3D (ResNet RPN, Kinetics-400 pretraining) model in the D3D: Distilled 3D Networks for Video Action Recognition paper on the AVA v2.1 dataset? | mAP (Val), GFlops, Params (M) |
What metrics were used to measure the I3D w/ RPN + JFT (Kinetics-400 pretraining) model in the A Better Baseline for AVA paper on the AVA v2.1 dataset? | mAP (Val), GFlops, Params (M) |
What metrics were used to measure the S3D-G w/ ResNet RPN (Kinetics-400 pretraining) model in the AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions paper on the AVA v2.1 dataset? | mAP (Val), GFlops, Params (M) |
What metrics were used to measure the I3D w/ RPN (Kinetics-400 pretraining) model in the A Better Baseline for AVA paper on the AVA v2.1 dataset? | mAP (Val), GFlops, Params (M) |
What metrics were used to measure the ACRN model in the Actor-Centric Relation Network paper on the AVA v2.1 dataset? | mAP (Val), GFlops, Params (M) |
What metrics were used to measure the BMN model in the BMN: Boundary-Matching Network for Temporal Action Proposal Generation paper on the THUMOS’14 dataset? | mAP@0.5, mAP@0.1, mAP@0.2, mAP@0.3, mAP@0.4 |
What metrics were used to measure the MGG UNet model in the Multi-granularity Generator for Temporal Action Proposal paper on the THUMOS’14 dataset? | mAP@0.5, mAP@0.1, mAP@0.2, mAP@0.3, mAP@0.4 |
What metrics were used to measure the BSN model in the BSN: Boundary Sensitive Network for Temporal Action Proposal Generation paper on the THUMOS’14 dataset? | mAP@0.5, mAP@0.1, mAP@0.2, mAP@0.3, mAP@0.4 |
What metrics were used to measure the Two-stream R-C3D (Sum) + OHEM model in the Two-Stream Region Convolutional 3D Network for Temporal Activity Detection paper on the THUMOS’14 dataset? | mAP@0.5, mAP@0.1, mAP@0.2, mAP@0.3, mAP@0.4 |
What metrics were used to measure the Single-stream R-C3D + OHEM model in the Two-Stream Region Convolutional 3D Network for Temporal Activity Detection paper on the THUMOS’14 dataset? | mAP@0.5, mAP@0.1, mAP@0.2, mAP@0.3, mAP@0.4 |
What metrics were used to measure the SSN model in the Temporal Action Detection with Structured Segment Networks paper on the THUMOS’14 dataset? | mAP@0.5, mAP@0.1, mAP@0.2, mAP@0.3, mAP@0.4 |
What metrics were used to measure the Dai et al. model in the Temporal Context Network for Activity Localization in Videos paper on the THUMOS’14 dataset? | mAP@0.5, mAP@0.1, mAP@0.2, mAP@0.3, mAP@0.4 |
What metrics were used to measure the TURN model in the TURN TAP: Temporal Unit Regression Network for Temporal Action Proposals paper on the THUMOS’14 dataset? | mAP@0.5, mAP@0.1, mAP@0.2, mAP@0.3, mAP@0.4 |
What metrics were used to measure the Shou et al. model in the Temporal Action Localization in Untrimmed Videos via Multi-stage CNNs paper on the THUMOS’14 dataset? | mAP@0.5, mAP@0.1, mAP@0.2, mAP@0.3, mAP@0.4 |
What metrics were used to measure the Yeung et al. model in the End-to-end Learning of Action Detection from Frame Glimpses in Videos paper on the THUMOS’14 dataset? | mAP@0.5, mAP@0.1, mAP@0.2, mAP@0.3, mAP@0.4 |
What metrics were used to measure the VideoMAE V2-g model in the VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the DEEP-HAL with ODF+SDF(I3D) model in the Self-supervising Action Recognition by Statistical Moment and Subspace Descriptors paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the TO+MaxExp+IDT model in the High-order Tensor Pooling with Attention for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the SCK⊕(I3D)+IDT model in the Tensor Representations for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the SO+MaxExp+IDT model in the High-order Tensor Pooling with Attention for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the R2+1D-BERT model in the Late Temporal Modeling in 3D CNN Architectures with BERT for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the Ours + ResNext101 BERT model in the Pose And Joint-Aware Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the SMART model in the SMART Frame Selection for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the BIKE model in the Bidirectional Cross-Modal Knowledge Exploration for Video Recognition with Pre-trained Vision-Language Models paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the OmniSource (SlowOnly-8x8-R101-RGB + I3D Flow) model in the Omni-sourced Webly-supervised Learning for Video Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the ZeroI2V ViT-L/14 model in the ZeroI2V: Zero-Cost Adaptation of Pre-trained Transformers from Image to Video paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the PERF-Net (distilled S3D-G) model in the PERF-Net: Pose Empowered RGB-Flow Net paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the BubbleNET model in the Bubblenet: A Disperse Recurrent Structure To Recognize Activities paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the HAF+BoW/FV halluc model in the Hallucinating IDT Descriptors and I3D Optical Flow Features for Action Recognition with CNNs paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the CCS + TSN (ImageNet+Kinetics pretrained) model in the Cooperative Cross-Stream Network for Discriminative Action Representation paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the RepFlow-50 ([2+1]D CNN, FcF, Non-local block) model in the Representation Flow for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the Multi-stream I3D model in the Contextual Action Cues from Camera Sensor for Multi-Stream Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the MARS+RGB+FLow (64 frames, Kinetics pretrained) model in the MARS: Motion-Augmented RGB Stream for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the Two-stream I3D model in the Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the Two-Stream I3D (Imagenet+Kinetics pre-training) model in the Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the LGD-3D Two-stream model in the Learning Spatio-Temporal Representation with Local and Global Diffusion paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the D3D + D3D model in the D3D: Distilled 3D Networks for Video Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the D3D (Kinetics-600 pretraining) model in the D3D: Distilled 3D Networks for Video Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the LGD-3D Flow model in the Learning Spatio-Temporal Representation with Local and Global Diffusion paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the Hidden Two-Stream model in the Hidden Two-Stream Convolutional Networks for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the R[2+1]D-TwoStream (Kinetics pretrained) model in the A Closer Look at Spatiotemporal Convolutions for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the D3D (Kinetics-400 pretraining) model in the D3D: Distilled 3D Networks for Video Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the I3D RGB + DMC-Net (I3D) model in the DMC-Net: Generating Discriminative Motion Cues for Fast Compressed Video Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the BQN model in the Busy-Quiet Video Disentangling for Video Classification paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the MSNet-R50 (16 frames, ImageNet pretrained) model in the MotionSqueeze: Neural Motion Feature Learning for Video Understanding paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the Flow-I3D (Kinetics pre-training) model in the Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the Flow-I3D (Imagenet+Kinetics pre-training) model in the Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the HATNet (32 frames) model in the Large Scale Holistic Video Understanding paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the R[2+1]D-Flow (Kinetics pretrained) model in the A Closer Look at Spatiotemporal Convolutions for Action Recognition paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the S3D-G (ImageNet, Kinetics-400 pretrained) model in the Rethinking Spatiotemporal Feature Learning: Speed-Accuracy Trade-offs in Video Classification paper on the HMDB-51 dataset? | Average accuracy of 3 splits |
What metrics were used to measure the FASTER32 (Kinetics pretrain) model in the FASTER Recurrent Networks for Efficient Video Classification paper on the HMDB-51 dataset? | Average accuracy of 3 splits |